Test Report: KVM_Linux_crio 19112

dd2bef9838819c5f0c455a2dd1fe411c3aadcb2e:2024-06-21:34987

Test fail (21/203)

TestAddons/Setup (2400.06s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-299362 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-299362 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: signal: killed (39m59.959500559s)
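Note: the start command above did not fail on its own; it was killed at 39m59.96s, which matches the test's 2400s (40-minute) budget shown in the header, meaning --wait=true never saw all components become healthy. A minimal local repro sketch, outside the test harness (the profile name "addons-repro", the trimmed addon list, and the explicit --wait-timeout are illustrative assumptions; the failed run used the full flag set and default timeout shown above):

	out/minikube-linux-amd64 start -p addons-repro \
	  --driver=kvm2 --container-runtime=crio --memory=4000 \
	  --wait=true --wait-timeout=15m --alsologtostderr \
	  --addons=registry --addons=metrics-server --addons=volcano --addons=gcp-auth

If that invocation also times out, "kubectl get pods -A" against the profile should show which addon pods never reached Ready.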

                                                
                                                
-- stdout --
	* [addons-299362] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "addons-299362" primary control-plane node in "addons-299362" cluster
	* Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	  - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	  - Using image docker.io/marcnuri/yakd:0.0.4
	  - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	  - Using image docker.io/busybox:stable
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image docker.io/registry:2.8.3
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	  - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	  - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.29.0
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image ghcr.io/helm/tiller:v2.17.0
	* Verifying registry addon...
	* Verifying ingress addon...
	* To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-299362 service yakd-dashboard -n yakd-dashboard
	
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	* Verifying csi-hostpath-driver addon...
	  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	* Verifying gcp-auth addon...
	* Your GCP credentials will now be mounted into every pod created in the addons-299362 cluster.
	* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	* If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	* Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner-rancher, storage-provisioner, helm-tiller, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver

                                                
                                                
-- /stdout --
** stderr ** 
	I0621 17:41:42.697886   15966 out.go:291] Setting OutFile to fd 1 ...
	I0621 17:41:42.698133   15966 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 17:41:42.698142   15966 out.go:304] Setting ErrFile to fd 2...
	I0621 17:41:42.698148   15966 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 17:41:42.698337   15966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 17:41:42.698939   15966 out.go:298] Setting JSON to false
	I0621 17:41:42.699713   15966 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1401,"bootTime":1718990302,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 17:41:42.699771   15966 start.go:139] virtualization: kvm guest
	I0621 17:41:42.701917   15966 out.go:177] * [addons-299362] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 17:41:42.703374   15966 notify.go:220] Checking for updates...
	I0621 17:41:42.703404   15966 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 17:41:42.704766   15966 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 17:41:42.706466   15966 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 17:41:42.707871   15966 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 17:41:42.709170   15966 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 17:41:42.710471   15966 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 17:41:42.711846   15966 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 17:41:42.743225   15966 out.go:177] * Using the kvm2 driver based on user configuration
	I0621 17:41:42.744547   15966 start.go:297] selected driver: kvm2
	I0621 17:41:42.744566   15966 start.go:901] validating driver "kvm2" against <nil>
	I0621 17:41:42.744578   15966 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 17:41:42.745243   15966 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 17:41:42.745325   15966 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 17:41:42.759803   15966 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 17:41:42.759856   15966 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0621 17:41:42.760085   15966 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0621 17:41:42.760159   15966 cni.go:84] Creating CNI manager for ""
	I0621 17:41:42.760175   15966 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0621 17:41:42.760188   15966 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0621 17:41:42.760242   15966 start.go:340] cluster config:
	{Name:addons-299362 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-299362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 17:41:42.760357   15966 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 17:41:42.762351   15966 out.go:177] * Starting "addons-299362" primary control-plane node in "addons-299362" cluster
	I0621 17:41:42.763759   15966 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 17:41:42.763796   15966 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 17:41:42.763809   15966 cache.go:56] Caching tarball of preloaded images
	I0621 17:41:42.763882   15966 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 17:41:42.763894   15966 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 17:41:42.764196   15966 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/config.json ...
	I0621 17:41:42.764230   15966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/config.json: {Name:mk3104766e101924b56e7fa74aaf0b48c56c4c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 17:41:42.764389   15966 start.go:360] acquireMachinesLock for addons-299362: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 17:41:42.764449   15966 start.go:364] duration metric: took 44.186µs to acquireMachinesLock for "addons-299362"
	I0621 17:41:42.764472   15966 start.go:93] Provisioning new machine with config: &{Name:addons-299362 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:addons-299362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 17:41:42.764538   15966 start.go:125] createHost starting for "" (driver="kvm2")
	I0621 17:41:42.766394   15966 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0621 17:41:42.766531   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:41:42.766583   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:41:42.780798   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41689
	I0621 17:41:42.781254   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:41:42.781769   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:41:42.781792   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:41:42.782161   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:41:42.782313   15966 main.go:141] libmachine: (addons-299362) Calling .GetMachineName
	I0621 17:41:42.782443   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:41:42.782578   15966 start.go:159] libmachine.API.Create for "addons-299362" (driver="kvm2")
	I0621 17:41:42.782610   15966 client.go:168] LocalClient.Create starting
	I0621 17:41:42.782653   15966 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem
	I0621 17:41:42.876803   15966 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem
	I0621 17:41:43.080193   15966 main.go:141] libmachine: Running pre-create checks...
	I0621 17:41:43.080218   15966 main.go:141] libmachine: (addons-299362) Calling .PreCreateCheck
	I0621 17:41:43.080689   15966 main.go:141] libmachine: (addons-299362) Calling .GetConfigRaw
	I0621 17:41:43.081132   15966 main.go:141] libmachine: Creating machine...
	I0621 17:41:43.081147   15966 main.go:141] libmachine: (addons-299362) Calling .Create
	I0621 17:41:43.081326   15966 main.go:141] libmachine: (addons-299362) Creating KVM machine...
	I0621 17:41:43.082581   15966 main.go:141] libmachine: (addons-299362) DBG | found existing default KVM network
	I0621 17:41:43.083286   15966 main.go:141] libmachine: (addons-299362) DBG | I0621 17:41:43.083149   15988 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0621 17:41:43.083392   15966 main.go:141] libmachine: (addons-299362) DBG | created network xml: 
	I0621 17:41:43.083424   15966 main.go:141] libmachine: (addons-299362) DBG | <network>
	I0621 17:41:43.083432   15966 main.go:141] libmachine: (addons-299362) DBG |   <name>mk-addons-299362</name>
	I0621 17:41:43.083437   15966 main.go:141] libmachine: (addons-299362) DBG |   <dns enable='no'/>
	I0621 17:41:43.083444   15966 main.go:141] libmachine: (addons-299362) DBG |   
	I0621 17:41:43.083450   15966 main.go:141] libmachine: (addons-299362) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0621 17:41:43.083457   15966 main.go:141] libmachine: (addons-299362) DBG |     <dhcp>
	I0621 17:41:43.083465   15966 main.go:141] libmachine: (addons-299362) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0621 17:41:43.083472   15966 main.go:141] libmachine: (addons-299362) DBG |     </dhcp>
	I0621 17:41:43.083479   15966 main.go:141] libmachine: (addons-299362) DBG |   </ip>
	I0621 17:41:43.083484   15966 main.go:141] libmachine: (addons-299362) DBG |   
	I0621 17:41:43.083489   15966 main.go:141] libmachine: (addons-299362) DBG | </network>
	I0621 17:41:43.083496   15966 main.go:141] libmachine: (addons-299362) DBG | 
	I0621 17:41:43.088859   15966 main.go:141] libmachine: (addons-299362) DBG | trying to create private KVM network mk-addons-299362 192.168.39.0/24...
	I0621 17:41:43.151649   15966 main.go:141] libmachine: (addons-299362) DBG | private KVM network mk-addons-299362 192.168.39.0/24 created
	I0621 17:41:43.151671   15966 main.go:141] libmachine: (addons-299362) DBG | I0621 17:41:43.151639   15988 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 17:41:43.151683   15966 main.go:141] libmachine: (addons-299362) Setting up store path in /home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362 ...
	I0621 17:41:43.151723   15966 main.go:141] libmachine: (addons-299362) Building disk image from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 17:41:43.151810   15966 main.go:141] libmachine: (addons-299362) Downloading /home/jenkins/minikube-integration/19112-8111/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0621 17:41:43.398821   15966 main.go:141] libmachine: (addons-299362) DBG | I0621 17:41:43.398697   15988 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa...
	I0621 17:41:43.492201   15966 main.go:141] libmachine: (addons-299362) DBG | I0621 17:41:43.492070   15988 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/addons-299362.rawdisk...
	I0621 17:41:43.492231   15966 main.go:141] libmachine: (addons-299362) DBG | Writing magic tar header
	I0621 17:41:43.492242   15966 main.go:141] libmachine: (addons-299362) DBG | Writing SSH key tar header
	I0621 17:41:43.492250   15966 main.go:141] libmachine: (addons-299362) DBG | I0621 17:41:43.492214   15988 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362 ...
	I0621 17:41:43.492356   15966 main.go:141] libmachine: (addons-299362) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362
	I0621 17:41:43.492383   15966 main.go:141] libmachine: (addons-299362) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362 (perms=drwx------)
	I0621 17:41:43.492396   15966 main.go:141] libmachine: (addons-299362) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines
	I0621 17:41:43.492408   15966 main.go:141] libmachine: (addons-299362) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines (perms=drwxr-xr-x)
	I0621 17:41:43.492425   15966 main.go:141] libmachine: (addons-299362) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube (perms=drwxr-xr-x)
	I0621 17:41:43.492434   15966 main.go:141] libmachine: (addons-299362) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111 (perms=drwxrwxr-x)
	I0621 17:41:43.492444   15966 main.go:141] libmachine: (addons-299362) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0621 17:41:43.492451   15966 main.go:141] libmachine: (addons-299362) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0621 17:41:43.492457   15966 main.go:141] libmachine: (addons-299362) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 17:41:43.492466   15966 main.go:141] libmachine: (addons-299362) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111
	I0621 17:41:43.492472   15966 main.go:141] libmachine: (addons-299362) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0621 17:41:43.492480   15966 main.go:141] libmachine: (addons-299362) DBG | Checking permissions on dir: /home/jenkins
	I0621 17:41:43.492486   15966 main.go:141] libmachine: (addons-299362) DBG | Checking permissions on dir: /home
	I0621 17:41:43.492492   15966 main.go:141] libmachine: (addons-299362) Creating domain...
	I0621 17:41:43.492497   15966 main.go:141] libmachine: (addons-299362) DBG | Skipping /home - not owner
	I0621 17:41:43.493456   15966 main.go:141] libmachine: (addons-299362) define libvirt domain using xml: 
	I0621 17:41:43.493481   15966 main.go:141] libmachine: (addons-299362) <domain type='kvm'>
	I0621 17:41:43.493492   15966 main.go:141] libmachine: (addons-299362)   <name>addons-299362</name>
	I0621 17:41:43.493503   15966 main.go:141] libmachine: (addons-299362)   <memory unit='MiB'>4000</memory>
	I0621 17:41:43.493593   15966 main.go:141] libmachine: (addons-299362)   <vcpu>2</vcpu>
	I0621 17:41:43.493627   15966 main.go:141] libmachine: (addons-299362)   <features>
	I0621 17:41:43.493639   15966 main.go:141] libmachine: (addons-299362)     <acpi/>
	I0621 17:41:43.493648   15966 main.go:141] libmachine: (addons-299362)     <apic/>
	I0621 17:41:43.493656   15966 main.go:141] libmachine: (addons-299362)     <pae/>
	I0621 17:41:43.493662   15966 main.go:141] libmachine: (addons-299362)     
	I0621 17:41:43.493670   15966 main.go:141] libmachine: (addons-299362)   </features>
	I0621 17:41:43.493678   15966 main.go:141] libmachine: (addons-299362)   <cpu mode='host-passthrough'>
	I0621 17:41:43.493684   15966 main.go:141] libmachine: (addons-299362)   
	I0621 17:41:43.493692   15966 main.go:141] libmachine: (addons-299362)   </cpu>
	I0621 17:41:43.493701   15966 main.go:141] libmachine: (addons-299362)   <os>
	I0621 17:41:43.493710   15966 main.go:141] libmachine: (addons-299362)     <type>hvm</type>
	I0621 17:41:43.493725   15966 main.go:141] libmachine: (addons-299362)     <boot dev='cdrom'/>
	I0621 17:41:43.493742   15966 main.go:141] libmachine: (addons-299362)     <boot dev='hd'/>
	I0621 17:41:43.493752   15966 main.go:141] libmachine: (addons-299362)     <bootmenu enable='no'/>
	I0621 17:41:43.493762   15966 main.go:141] libmachine: (addons-299362)   </os>
	I0621 17:41:43.493771   15966 main.go:141] libmachine: (addons-299362)   <devices>
	I0621 17:41:43.493776   15966 main.go:141] libmachine: (addons-299362)     <disk type='file' device='cdrom'>
	I0621 17:41:43.493785   15966 main.go:141] libmachine: (addons-299362)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/boot2docker.iso'/>
	I0621 17:41:43.493793   15966 main.go:141] libmachine: (addons-299362)       <target dev='hdc' bus='scsi'/>
	I0621 17:41:43.493824   15966 main.go:141] libmachine: (addons-299362)       <readonly/>
	I0621 17:41:43.493835   15966 main.go:141] libmachine: (addons-299362)     </disk>
	I0621 17:41:43.493853   15966 main.go:141] libmachine: (addons-299362)     <disk type='file' device='disk'>
	I0621 17:41:43.493871   15966 main.go:141] libmachine: (addons-299362)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0621 17:41:43.493890   15966 main.go:141] libmachine: (addons-299362)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/addons-299362.rawdisk'/>
	I0621 17:41:43.493904   15966 main.go:141] libmachine: (addons-299362)       <target dev='hda' bus='virtio'/>
	I0621 17:41:43.493912   15966 main.go:141] libmachine: (addons-299362)     </disk>
	I0621 17:41:43.493920   15966 main.go:141] libmachine: (addons-299362)     <interface type='network'>
	I0621 17:41:43.493926   15966 main.go:141] libmachine: (addons-299362)       <source network='mk-addons-299362'/>
	I0621 17:41:43.493933   15966 main.go:141] libmachine: (addons-299362)       <model type='virtio'/>
	I0621 17:41:43.493938   15966 main.go:141] libmachine: (addons-299362)     </interface>
	I0621 17:41:43.493947   15966 main.go:141] libmachine: (addons-299362)     <interface type='network'>
	I0621 17:41:43.493970   15966 main.go:141] libmachine: (addons-299362)       <source network='default'/>
	I0621 17:41:43.493988   15966 main.go:141] libmachine: (addons-299362)       <model type='virtio'/>
	I0621 17:41:43.493999   15966 main.go:141] libmachine: (addons-299362)     </interface>
	I0621 17:41:43.494014   15966 main.go:141] libmachine: (addons-299362)     <serial type='pty'>
	I0621 17:41:43.494026   15966 main.go:141] libmachine: (addons-299362)       <target port='0'/>
	I0621 17:41:43.494038   15966 main.go:141] libmachine: (addons-299362)     </serial>
	I0621 17:41:43.494048   15966 main.go:141] libmachine: (addons-299362)     <console type='pty'>
	I0621 17:41:43.494064   15966 main.go:141] libmachine: (addons-299362)       <target type='serial' port='0'/>
	I0621 17:41:43.494076   15966 main.go:141] libmachine: (addons-299362)     </console>
	I0621 17:41:43.494084   15966 main.go:141] libmachine: (addons-299362)     <rng model='virtio'>
	I0621 17:41:43.494110   15966 main.go:141] libmachine: (addons-299362)       <backend model='random'>/dev/random</backend>
	I0621 17:41:43.494121   15966 main.go:141] libmachine: (addons-299362)     </rng>
	I0621 17:41:43.494132   15966 main.go:141] libmachine: (addons-299362)     
	I0621 17:41:43.494142   15966 main.go:141] libmachine: (addons-299362)     
	I0621 17:41:43.494159   15966 main.go:141] libmachine: (addons-299362)   </devices>
	I0621 17:41:43.494170   15966 main.go:141] libmachine: (addons-299362) </domain>
	I0621 17:41:43.494178   15966 main.go:141] libmachine: (addons-299362) 
	I0621 17:41:43.500417   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4d:41:de in network default
	I0621 17:41:43.500929   15966 main.go:141] libmachine: (addons-299362) Ensuring networks are active...
	I0621 17:41:43.500944   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:41:43.501623   15966 main.go:141] libmachine: (addons-299362) Ensuring network default is active
	I0621 17:41:43.501987   15966 main.go:141] libmachine: (addons-299362) Ensuring network mk-addons-299362 is active
	I0621 17:41:43.502457   15966 main.go:141] libmachine: (addons-299362) Getting domain xml...
	I0621 17:41:43.503154   15966 main.go:141] libmachine: (addons-299362) Creating domain...
	I0621 17:41:44.900861   15966 main.go:141] libmachine: (addons-299362) Waiting to get IP...
	I0621 17:41:44.901638   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:41:44.902134   15966 main.go:141] libmachine: (addons-299362) DBG | unable to find current IP address of domain addons-299362 in network mk-addons-299362
	I0621 17:41:44.902161   15966 main.go:141] libmachine: (addons-299362) DBG | I0621 17:41:44.902094   15988 retry.go:31] will retry after 238.611118ms: waiting for machine to come up
	I0621 17:41:45.142597   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:41:45.143212   15966 main.go:141] libmachine: (addons-299362) DBG | unable to find current IP address of domain addons-299362 in network mk-addons-299362
	I0621 17:41:45.143242   15966 main.go:141] libmachine: (addons-299362) DBG | I0621 17:41:45.143163   15988 retry.go:31] will retry after 338.04846ms: waiting for machine to come up
	I0621 17:41:45.482659   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:41:45.483077   15966 main.go:141] libmachine: (addons-299362) DBG | unable to find current IP address of domain addons-299362 in network mk-addons-299362
	I0621 17:41:45.483125   15966 main.go:141] libmachine: (addons-299362) DBG | I0621 17:41:45.483030   15988 retry.go:31] will retry after 376.541147ms: waiting for machine to come up
	I0621 17:41:45.861654   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:41:45.862080   15966 main.go:141] libmachine: (addons-299362) DBG | unable to find current IP address of domain addons-299362 in network mk-addons-299362
	I0621 17:41:45.862098   15966 main.go:141] libmachine: (addons-299362) DBG | I0621 17:41:45.862041   15988 retry.go:31] will retry after 591.404874ms: waiting for machine to come up
	I0621 17:41:46.455020   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:41:46.455416   15966 main.go:141] libmachine: (addons-299362) DBG | unable to find current IP address of domain addons-299362 in network mk-addons-299362
	I0621 17:41:46.455449   15966 main.go:141] libmachine: (addons-299362) DBG | I0621 17:41:46.455373   15988 retry.go:31] will retry after 571.185737ms: waiting for machine to come up
	I0621 17:41:47.028301   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:41:47.028770   15966 main.go:141] libmachine: (addons-299362) DBG | unable to find current IP address of domain addons-299362 in network mk-addons-299362
	I0621 17:41:47.028798   15966 main.go:141] libmachine: (addons-299362) DBG | I0621 17:41:47.028701   15988 retry.go:31] will retry after 860.413457ms: waiting for machine to come up
	I0621 17:41:47.890378   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:41:47.890711   15966 main.go:141] libmachine: (addons-299362) DBG | unable to find current IP address of domain addons-299362 in network mk-addons-299362
	I0621 17:41:47.890734   15966 main.go:141] libmachine: (addons-299362) DBG | I0621 17:41:47.890681   15988 retry.go:31] will retry after 863.058243ms: waiting for machine to come up
	I0621 17:41:48.755183   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:41:48.755540   15966 main.go:141] libmachine: (addons-299362) DBG | unable to find current IP address of domain addons-299362 in network mk-addons-299362
	I0621 17:41:48.755565   15966 main.go:141] libmachine: (addons-299362) DBG | I0621 17:41:48.755475   15988 retry.go:31] will retry after 1.274888046s: waiting for machine to come up
	I0621 17:41:50.031810   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:41:50.032129   15966 main.go:141] libmachine: (addons-299362) DBG | unable to find current IP address of domain addons-299362 in network mk-addons-299362
	I0621 17:41:50.032157   15966 main.go:141] libmachine: (addons-299362) DBG | I0621 17:41:50.032081   15988 retry.go:31] will retry after 1.804894555s: waiting for machine to come up
	I0621 17:41:51.839091   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:41:51.839448   15966 main.go:141] libmachine: (addons-299362) DBG | unable to find current IP address of domain addons-299362 in network mk-addons-299362
	I0621 17:41:51.839475   15966 main.go:141] libmachine: (addons-299362) DBG | I0621 17:41:51.839404   15988 retry.go:31] will retry after 1.815264751s: waiting for machine to come up
	I0621 17:41:53.656502   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:41:53.656947   15966 main.go:141] libmachine: (addons-299362) DBG | unable to find current IP address of domain addons-299362 in network mk-addons-299362
	I0621 17:41:53.656971   15966 main.go:141] libmachine: (addons-299362) DBG | I0621 17:41:53.656897   15988 retry.go:31] will retry after 1.771283428s: waiting for machine to come up
	I0621 17:41:55.430864   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:41:55.431402   15966 main.go:141] libmachine: (addons-299362) DBG | unable to find current IP address of domain addons-299362 in network mk-addons-299362
	I0621 17:41:55.431432   15966 main.go:141] libmachine: (addons-299362) DBG | I0621 17:41:55.431354   15988 retry.go:31] will retry after 2.241154508s: waiting for machine to come up
	I0621 17:41:57.676738   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:41:57.678446   15966 main.go:141] libmachine: (addons-299362) DBG | unable to find current IP address of domain addons-299362 in network mk-addons-299362
	I0621 17:41:57.678471   15966 main.go:141] libmachine: (addons-299362) DBG | I0621 17:41:57.678382   15988 retry.go:31] will retry after 4.444265201s: waiting for machine to come up
	I0621 17:42:02.127044   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:02.127524   15966 main.go:141] libmachine: (addons-299362) Found IP for machine: 192.168.39.187
	I0621 17:42:02.127549   15966 main.go:141] libmachine: (addons-299362) Reserving static IP address...
	I0621 17:42:02.127560   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has current primary IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:02.127897   15966 main.go:141] libmachine: (addons-299362) DBG | unable to find host DHCP lease matching {name: "addons-299362", mac: "52:54:00:4a:bb:14", ip: "192.168.39.187"} in network mk-addons-299362
	I0621 17:42:02.204142   15966 main.go:141] libmachine: (addons-299362) DBG | Getting to WaitForSSH function...
	I0621 17:42:02.204168   15966 main.go:141] libmachine: (addons-299362) Reserved static IP address: 192.168.39.187
	I0621 17:42:02.204178   15966 main.go:141] libmachine: (addons-299362) Waiting for SSH to be available...
	I0621 17:42:02.206797   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:02.207083   15966 main.go:141] libmachine: (addons-299362) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362
	I0621 17:42:02.207110   15966 main.go:141] libmachine: (addons-299362) DBG | unable to find defined IP address of network mk-addons-299362 interface with MAC address 52:54:00:4a:bb:14
	I0621 17:42:02.207218   15966 main.go:141] libmachine: (addons-299362) DBG | Using SSH client type: external
	I0621 17:42:02.207236   15966 main.go:141] libmachine: (addons-299362) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa (-rw-------)
	I0621 17:42:02.207295   15966 main.go:141] libmachine: (addons-299362) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 17:42:02.207319   15966 main.go:141] libmachine: (addons-299362) DBG | About to run SSH command:
	I0621 17:42:02.207337   15966 main.go:141] libmachine: (addons-299362) DBG | exit 0
	I0621 17:42:02.219474   15966 main.go:141] libmachine: (addons-299362) DBG | SSH cmd err, output: exit status 255: 
	I0621 17:42:02.219502   15966 main.go:141] libmachine: (addons-299362) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0621 17:42:02.219510   15966 main.go:141] libmachine: (addons-299362) DBG | command : exit 0
	I0621 17:42:02.219515   15966 main.go:141] libmachine: (addons-299362) DBG | err     : exit status 255
	I0621 17:42:02.219530   15966 main.go:141] libmachine: (addons-299362) DBG | output  : 
	I0621 17:42:05.221513   15966 main.go:141] libmachine: (addons-299362) DBG | Getting to WaitForSSH function...
	I0621 17:42:05.223788   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:05.224187   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:05.224233   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:05.224293   15966 main.go:141] libmachine: (addons-299362) DBG | Using SSH client type: external
	I0621 17:42:05.224331   15966 main.go:141] libmachine: (addons-299362) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa (-rw-------)
	I0621 17:42:05.224362   15966 main.go:141] libmachine: (addons-299362) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.187 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 17:42:05.224380   15966 main.go:141] libmachine: (addons-299362) DBG | About to run SSH command:
	I0621 17:42:05.224395   15966 main.go:141] libmachine: (addons-299362) DBG | exit 0
	I0621 17:42:05.345772   15966 main.go:141] libmachine: (addons-299362) DBG | SSH cmd err, output: <nil>: 
	I0621 17:42:05.346107   15966 main.go:141] libmachine: (addons-299362) KVM machine creation complete!
	I0621 17:42:05.346383   15966 main.go:141] libmachine: (addons-299362) Calling .GetConfigRaw
	I0621 17:42:05.346948   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:42:05.347140   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:42:05.347292   15966 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0621 17:42:05.347307   15966 main.go:141] libmachine: (addons-299362) Calling .GetState
	I0621 17:42:05.348662   15966 main.go:141] libmachine: Detecting operating system of created instance...
	I0621 17:42:05.348674   15966 main.go:141] libmachine: Waiting for SSH to be available...
	I0621 17:42:05.348679   15966 main.go:141] libmachine: Getting to WaitForSSH function...
	I0621 17:42:05.348686   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:05.351027   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:05.351393   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:05.351415   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:05.351572   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:05.351755   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:05.351915   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:05.352052   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:05.352231   15966 main.go:141] libmachine: Using SSH client type: native
	I0621 17:42:05.352452   15966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0621 17:42:05.352465   15966 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0621 17:42:05.449050   15966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 17:42:05.449075   15966 main.go:141] libmachine: Detecting the provisioner...
	I0621 17:42:05.449083   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:05.451822   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:05.452184   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:05.452220   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:05.452418   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:05.452628   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:05.452889   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:05.453085   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:05.453355   15966 main.go:141] libmachine: Using SSH client type: native
	I0621 17:42:05.453517   15966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0621 17:42:05.453551   15966 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0621 17:42:05.550254   15966 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0621 17:42:05.550324   15966 main.go:141] libmachine: found compatible host: buildroot
	I0621 17:42:05.550331   15966 main.go:141] libmachine: Provisioning with buildroot...
	I0621 17:42:05.550338   15966 main.go:141] libmachine: (addons-299362) Calling .GetMachineName
	I0621 17:42:05.550615   15966 buildroot.go:166] provisioning hostname "addons-299362"
	I0621 17:42:05.550643   15966 main.go:141] libmachine: (addons-299362) Calling .GetMachineName
	I0621 17:42:05.550845   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:05.553272   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:05.553636   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:05.553669   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:05.553822   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:05.553997   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:05.554190   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:05.554323   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:05.554507   15966 main.go:141] libmachine: Using SSH client type: native
	I0621 17:42:05.554681   15966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0621 17:42:05.554694   15966 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-299362 && echo "addons-299362" | sudo tee /etc/hostname
	I0621 17:42:05.669095   15966 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-299362
	
	I0621 17:42:05.669119   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:05.671905   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:05.672218   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:05.672244   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:05.672414   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:05.672601   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:05.672778   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:05.672964   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:05.673168   15966 main.go:141] libmachine: Using SSH client type: native
	I0621 17:42:05.673330   15966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0621 17:42:05.673346   15966 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-299362' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-299362/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-299362' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 17:42:05.777897   15966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 17:42:05.777944   15966 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 17:42:05.777986   15966 buildroot.go:174] setting up certificates
	I0621 17:42:05.777996   15966 provision.go:84] configureAuth start
	I0621 17:42:05.778008   15966 main.go:141] libmachine: (addons-299362) Calling .GetMachineName
	I0621 17:42:05.778307   15966 main.go:141] libmachine: (addons-299362) Calling .GetIP
	I0621 17:42:05.781090   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:05.781442   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:05.781468   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:05.781624   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:05.784177   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:05.784569   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:05.784593   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:05.784702   15966 provision.go:143] copyHostCerts
	I0621 17:42:05.784787   15966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 17:42:05.784895   15966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 17:42:05.784948   15966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 17:42:05.784994   15966 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.addons-299362 san=[127.0.0.1 192.168.39.187 addons-299362 localhost minikube]
	I0621 17:42:05.946724   15966 provision.go:177] copyRemoteCerts
	I0621 17:42:05.946787   15966 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 17:42:05.946809   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:05.949599   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:05.949946   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:05.949972   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:05.950186   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:05.950399   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:05.950559   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:05.950735   15966 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa Username:docker}
	I0621 17:42:06.027638   15966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 17:42:06.051710   15966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0621 17:42:06.075118   15966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 17:42:06.099592   15966 provision.go:87] duration metric: took 321.579831ms to configureAuth
	I0621 17:42:06.099629   15966 buildroot.go:189] setting minikube options for container-runtime
	I0621 17:42:06.099849   15966 config.go:182] Loaded profile config "addons-299362": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 17:42:06.099933   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:06.102591   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:06.102877   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:06.102905   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:06.103051   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:06.103251   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:06.103433   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:06.103556   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:06.103714   15966 main.go:141] libmachine: Using SSH client type: native
	I0621 17:42:06.103953   15966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0621 17:42:06.103974   15966 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 17:42:06.352481   15966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 17:42:06.352517   15966 main.go:141] libmachine: Checking connection to Docker...
	I0621 17:42:06.352527   15966 main.go:141] libmachine: (addons-299362) Calling .GetURL
	I0621 17:42:06.354266   15966 main.go:141] libmachine: (addons-299362) DBG | Using libvirt version 6000000
	I0621 17:42:06.356398   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:06.356694   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:06.356717   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:06.356835   15966 main.go:141] libmachine: Docker is up and running!
	I0621 17:42:06.356848   15966 main.go:141] libmachine: Reticulating splines...
	I0621 17:42:06.356857   15966 client.go:171] duration metric: took 23.574236273s to LocalClient.Create
	I0621 17:42:06.356883   15966 start.go:167] duration metric: took 23.574306328s to libmachine.API.Create "addons-299362"
	I0621 17:42:06.356896   15966 start.go:293] postStartSetup for "addons-299362" (driver="kvm2")
	I0621 17:42:06.356910   15966 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 17:42:06.356933   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:42:06.357231   15966 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 17:42:06.357254   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:06.359398   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:06.359662   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:06.359685   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:06.359883   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:06.360053   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:06.360223   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:06.360353   15966 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa Username:docker}
	I0621 17:42:06.440006   15966 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 17:42:06.444022   15966 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 17:42:06.444049   15966 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 17:42:06.444132   15966 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 17:42:06.444165   15966 start.go:296] duration metric: took 87.262372ms for postStartSetup
	I0621 17:42:06.444199   15966 main.go:141] libmachine: (addons-299362) Calling .GetConfigRaw
	I0621 17:42:06.444812   15966 main.go:141] libmachine: (addons-299362) Calling .GetIP
	I0621 17:42:06.447866   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:06.448192   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:06.448229   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:06.448441   15966 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/config.json ...
	I0621 17:42:06.448622   15966 start.go:128] duration metric: took 23.684073365s to createHost
	I0621 17:42:06.448646   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:06.450682   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:06.451022   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:06.451044   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:06.451174   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:06.451338   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:06.451506   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:06.451607   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:06.451749   15966 main.go:141] libmachine: Using SSH client type: native
	I0621 17:42:06.451939   15966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0621 17:42:06.451951   15966 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0621 17:42:06.550384   15966 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718991726.526854340
	
	I0621 17:42:06.550407   15966 fix.go:216] guest clock: 1718991726.526854340
	I0621 17:42:06.550417   15966 fix.go:229] Guest: 2024-06-21 17:42:06.52685434 +0000 UTC Remote: 2024-06-21 17:42:06.448633994 +0000 UTC m=+23.785691881 (delta=78.220346ms)
	I0621 17:42:06.550468   15966 fix.go:200] guest clock delta is within tolerance: 78.220346ms
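Editor's note: the fix.go lines above compare the VM's `date +%s.%N` output against the host-side timestamp and accept the start when the absolute difference is small (78.220346ms here). A minimal sketch of that comparison, using the exact timestamps from this log, follows; the one-second threshold is an assumption for illustration and is not necessarily minikube's configured tolerance.

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance returns the absolute guest/host clock delta and whether
	// it falls inside the allowed drift.
	func withinTolerance(guest, host time.Time, max time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= max
	}

	func main() {
		// 1718991726.526854340 is the guest's `date +%s.%N` output from the log.
		guest := time.Unix(0, 1718991726526854340)
		// Host-side reference time from the same log line.
		host := time.Date(2024, time.June, 21, 17, 42, 6, 448633994, time.UTC)
		d, ok := withinTolerance(guest, host, time.Second) // 1s threshold is assumed
		fmt.Printf("delta=%v within tolerance=%v\n", d, ok)   // prints delta=78.220346ms
	}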
	I0621 17:42:06.550477   15966 start.go:83] releasing machines lock for "addons-299362", held for 23.78601619s
	I0621 17:42:06.550507   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:42:06.550801   15966 main.go:141] libmachine: (addons-299362) Calling .GetIP
	I0621 17:42:06.553730   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:06.554251   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:06.554282   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:06.554431   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:42:06.554901   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:42:06.555083   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:42:06.555170   15966 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 17:42:06.555214   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:06.555294   15966 ssh_runner.go:195] Run: cat /version.json
	I0621 17:42:06.555313   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:06.558190   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:06.558562   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:06.558591   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:06.558617   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:06.558866   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:06.559033   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:06.559060   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:06.559120   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:06.559224   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:06.559293   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:06.559414   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:06.559424   15966 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa Username:docker}
	I0621 17:42:06.559548   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:06.559678   15966 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa Username:docker}
	I0621 17:42:06.630424   15966 ssh_runner.go:195] Run: systemctl --version
	I0621 17:42:06.667563   15966 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 17:42:06.825054   15966 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 17:42:06.830805   15966 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 17:42:06.830874   15966 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 17:42:06.845883   15966 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0621 17:42:06.845906   15966 start.go:494] detecting cgroup driver to use...
	I0621 17:42:06.845986   15966 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 17:42:06.861175   15966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 17:42:06.874543   15966 docker.go:217] disabling cri-docker service (if available) ...
	I0621 17:42:06.874613   15966 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 17:42:06.887577   15966 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 17:42:06.900941   15966 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 17:42:07.013240   15966 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 17:42:07.169491   15966 docker.go:233] disabling docker service ...
	I0621 17:42:07.169554   15966 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 17:42:07.182782   15966 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 17:42:07.194926   15966 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 17:42:07.319587   15966 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 17:42:07.434979   15966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 17:42:07.448229   15966 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 17:42:07.465009   15966 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 17:42:07.465075   15966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 17:42:07.474335   15966 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 17:42:07.474402   15966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 17:42:07.483959   15966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 17:42:07.493204   15966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 17:42:07.502644   15966 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 17:42:07.512376   15966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 17:42:07.521822   15966 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 17:42:07.537627   15966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
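Editor's note: the run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls). Purely as an illustration, the first of those edits is shown below expressed in Go with a regexp instead of sed; the file path and replacement value come from the log, while the Go form itself is an assumption made for readability, not how minikube performs the edit.

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		// Same file the sed commands in the log target.
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			log.Fatal(err)
		}
	}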
	I0621 17:42:07.547237   15966 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 17:42:07.556102   15966 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 17:42:07.556177   15966 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 17:42:07.569247   15966 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 17:42:07.578625   15966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 17:42:07.697149   15966 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 17:42:07.822681   15966 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 17:42:07.822790   15966 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 17:42:07.827629   15966 start.go:562] Will wait 60s for crictl version
	I0621 17:42:07.827699   15966 ssh_runner.go:195] Run: which crictl
	I0621 17:42:07.831129   15966 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 17:42:07.877170   15966 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 17:42:07.877294   15966 ssh_runner.go:195] Run: crio --version
	I0621 17:42:07.903177   15966 ssh_runner.go:195] Run: crio --version
	I0621 17:42:07.930861   15966 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 17:42:07.932187   15966 main.go:141] libmachine: (addons-299362) Calling .GetIP
	I0621 17:42:07.935178   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:07.935668   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:07.935689   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:07.935978   15966 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 17:42:07.939810   15966 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 17:42:07.951471   15966 kubeadm.go:877] updating cluster {Name:addons-299362 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-299362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0621 17:42:07.951569   15966 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 17:42:07.951615   15966 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 17:42:07.982293   15966 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0621 17:42:07.982352   15966 ssh_runner.go:195] Run: which lz4
	I0621 17:42:07.985982   15966 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0621 17:42:07.989625   15966 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0621 17:42:07.989647   15966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0621 17:42:09.116456   15966 crio.go:462] duration metric: took 1.130505996s to copy over tarball
	I0621 17:42:09.116516   15966 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0621 17:42:11.299990   15966 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.183446688s)
	I0621 17:42:11.300017   15966 crio.go:469] duration metric: took 2.183535442s to extract the tarball
	I0621 17:42:11.300026   15966 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0621 17:42:11.336972   15966 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 17:42:11.375991   15966 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 17:42:11.376019   15966 cache_images.go:84] Images are preloaded, skipping loading
	I0621 17:42:11.376030   15966 kubeadm.go:928] updating node { 192.168.39.187 8443 v1.30.2 crio true true} ...
	I0621 17:42:11.376135   15966 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-299362 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-299362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0621 17:42:11.376224   15966 ssh_runner.go:195] Run: crio config
	I0621 17:42:11.423252   15966 cni.go:84] Creating CNI manager for ""
	I0621 17:42:11.423271   15966 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0621 17:42:11.423281   15966 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0621 17:42:11.423302   15966 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.187 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-299362 NodeName:addons-299362 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0621 17:42:11.423468   15966 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.187
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-299362"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.187
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.187"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0621 17:42:11.423524   15966 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 17:42:11.432845   15966 binaries.go:44] Found k8s binaries, skipping transfer
	I0621 17:42:11.432906   15966 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0621 17:42:11.441479   15966 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0621 17:42:11.456654   15966 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0621 17:42:11.471695   15966 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0621 17:42:11.487164   15966 ssh_runner.go:195] Run: grep 192.168.39.187	control-plane.minikube.internal$ /etc/hosts
	I0621 17:42:11.490731   15966 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.187	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 17:42:11.501740   15966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 17:42:11.617578   15966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0621 17:42:11.632752   15966 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362 for IP: 192.168.39.187
	I0621 17:42:11.632779   15966 certs.go:194] generating shared ca certs ...
	I0621 17:42:11.632799   15966 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 17:42:11.632965   15966 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 17:42:11.794161   15966 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt ...
	I0621 17:42:11.794189   15966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt: {Name:mk8f2dfff48454b29c625265558b4d00edc690ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 17:42:11.794374   15966 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key ...
	I0621 17:42:11.794389   15966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key: {Name:mk67f1b3dde9a892db0dbab81bfc255f3e3bedc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
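Editor's note: the certs.go/crypto.go steps above amount to generating a key pair and a self-signed "minikubeCA" certificate, then writing the PEM-encoded pair to ca.crt and ca.key. The sketch below shows that shape with the Go standard library; the CommonName is taken from the log, while the key size and validity period are assumptions for illustration and the code is not minikube's actual certificate helper.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"os"
		"time"
	)

	func main() {
		// Generate the CA key pair (2048-bit RSA is an assumed size).
		priv, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		// Self-signed CA template in the spirit of the "minikubeCA" cert in the log.
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0), // assumed 10-year validity
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
			IsCA:                  true,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &priv.PublicKey, priv)
		if err != nil {
			log.Fatal(err)
		}
		// Write the PEM-encoded cert and key, mirroring the ca.crt/ca.key pair above.
		certOut, err := os.Create("ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		certOut.Close()

		keyOut, err := os.Create("ca.key")
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv)})
		keyOut.Close()
	}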
	I0621 17:42:11.794482   15966 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 17:42:11.985544   15966 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt ...
	I0621 17:42:11.985579   15966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt: {Name:mk2b7bdc6bf617523b8e047d9866ba57f334d595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 17:42:11.985763   15966 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key ...
	I0621 17:42:11.985782   15966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key: {Name:mk5a02b4978b399dbe0a14a04fef6daf646a5580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 17:42:11.985897   15966 certs.go:256] generating profile certs ...
	I0621 17:42:11.985950   15966 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/client.key
	I0621 17:42:11.985978   15966 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/client.crt with IP's: []
	I0621 17:42:12.056502   15966 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/client.crt ...
	I0621 17:42:12.056537   15966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/client.crt: {Name:mk0076a9fb29f804270d559c33955cd82f5a7600 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 17:42:12.056723   15966 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/client.key ...
	I0621 17:42:12.056737   15966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/client.key: {Name:mk8a16957c93491ace566c93bedc95891aab321f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 17:42:12.056831   15966 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/apiserver.key.466fd825
	I0621 17:42:12.056854   15966 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/apiserver.crt.466fd825 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.187]
	I0621 17:42:12.183068   15966 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/apiserver.crt.466fd825 ...
	I0621 17:42:12.183099   15966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/apiserver.crt.466fd825: {Name:mked50c174107c240b0f42a35ee64353d1791ac3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 17:42:12.183279   15966 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/apiserver.key.466fd825 ...
	I0621 17:42:12.183296   15966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/apiserver.key.466fd825: {Name:mkb20bb6af58ef03dc742b7fffb7f29ae14513d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 17:42:12.183388   15966 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/apiserver.crt.466fd825 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/apiserver.crt
	I0621 17:42:12.183477   15966 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/apiserver.key.466fd825 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/apiserver.key
	I0621 17:42:12.183545   15966 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/proxy-client.key
	I0621 17:42:12.183569   15966 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/proxy-client.crt with IP's: []
	I0621 17:42:12.281845   15966 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/proxy-client.crt ...
	I0621 17:42:12.281892   15966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/proxy-client.crt: {Name:mkdc4b61d943ea7b2a3359cda29c1444c17430ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 17:42:12.282078   15966 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/proxy-client.key ...
	I0621 17:42:12.282092   15966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/proxy-client.key: {Name:mk28c8edc46bcdac42f8658d6913f8cee02236df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 17:42:12.282418   15966 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 17:42:12.282471   15966 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 17:42:12.282505   15966 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 17:42:12.282539   15966 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 17:42:12.283194   15966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 17:42:12.306681   15966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 17:42:12.330465   15966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 17:42:12.357551   15966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 17:42:12.382724   15966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0621 17:42:12.405889   15966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0621 17:42:12.427791   15966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 17:42:12.449756   15966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/addons-299362/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0621 17:42:12.471216   15966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 17:42:12.492423   15966 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0621 17:42:12.507309   15966 ssh_runner.go:195] Run: openssl version
	I0621 17:42:12.512918   15966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 17:42:12.522678   15966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 17:42:12.526648   15966 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 17:42:12.526698   15966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 17:42:12.531947   15966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0621 17:42:12.541993   15966 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 17:42:12.546080   15966 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 17:42:12.546145   15966 kubeadm.go:391] StartCluster: {Name:addons-299362 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-299362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 17:42:12.546222   15966 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0621 17:42:12.546285   15966 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0621 17:42:12.580883   15966 cri.go:89] found id: ""
	I0621 17:42:12.580947   15966 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0621 17:42:12.590101   15966 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0621 17:42:12.599086   15966 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0621 17:42:12.608171   15966 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0621 17:42:12.608189   15966 kubeadm.go:156] found existing configuration files:
	
	I0621 17:42:12.608236   15966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0621 17:42:12.616771   15966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0621 17:42:12.616838   15966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0621 17:42:12.625823   15966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0621 17:42:12.634243   15966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0621 17:42:12.634298   15966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0621 17:42:12.642838   15966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0621 17:42:12.651102   15966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0621 17:42:12.651155   15966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0621 17:42:12.659760   15966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0621 17:42:12.668402   15966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0621 17:42:12.668447   15966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0621 17:42:12.677218   15966 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0621 17:42:12.858835   15966 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0621 17:42:23.032614   15966 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0621 17:42:23.032680   15966 kubeadm.go:309] [preflight] Running pre-flight checks
	I0621 17:42:23.032774   15966 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0621 17:42:23.032886   15966 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0621 17:42:23.033025   15966 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0621 17:42:23.033115   15966 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0621 17:42:23.034700   15966 out.go:204]   - Generating certificates and keys ...
	I0621 17:42:23.034792   15966 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0621 17:42:23.034880   15966 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0621 17:42:23.034986   15966 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0621 17:42:23.035057   15966 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0621 17:42:23.035131   15966 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0621 17:42:23.035204   15966 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0621 17:42:23.035280   15966 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0621 17:42:23.035437   15966 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-299362 localhost] and IPs [192.168.39.187 127.0.0.1 ::1]
	I0621 17:42:23.035511   15966 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0621 17:42:23.035675   15966 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-299362 localhost] and IPs [192.168.39.187 127.0.0.1 ::1]
	I0621 17:42:23.035767   15966 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0621 17:42:23.035858   15966 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0621 17:42:23.035919   15966 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0621 17:42:23.036101   15966 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0621 17:42:23.036229   15966 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0621 17:42:23.036309   15966 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0621 17:42:23.036382   15966 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0621 17:42:23.036468   15966 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0621 17:42:23.036542   15966 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0621 17:42:23.036641   15966 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0621 17:42:23.036729   15966 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0621 17:42:23.038201   15966 out.go:204]   - Booting up control plane ...
	I0621 17:42:23.038425   15966 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0621 17:42:23.038551   15966 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0621 17:42:23.038616   15966 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0621 17:42:23.038706   15966 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0621 17:42:23.038784   15966 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0621 17:42:23.038834   15966 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0621 17:42:23.039024   15966 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0621 17:42:23.039120   15966 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0621 17:42:23.039185   15966 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 500.923437ms
	I0621 17:42:23.039245   15966 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0621 17:42:23.039295   15966 kubeadm.go:309] [api-check] The API server is healthy after 5.002178212s
	I0621 17:42:23.039406   15966 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0621 17:42:23.039519   15966 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0621 17:42:23.039568   15966 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0621 17:42:23.039721   15966 kubeadm.go:309] [mark-control-plane] Marking the node addons-299362 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0621 17:42:23.039770   15966 kubeadm.go:309] [bootstrap-token] Using token: ovr80d.bfmxl3kesmj3l5ma
	I0621 17:42:23.040963   15966 out.go:204]   - Configuring RBAC rules ...
	I0621 17:42:23.041073   15966 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0621 17:42:23.041175   15966 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0621 17:42:23.041367   15966 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0621 17:42:23.041535   15966 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0621 17:42:23.041681   15966 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0621 17:42:23.041790   15966 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0621 17:42:23.041981   15966 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0621 17:42:23.042058   15966 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0621 17:42:23.042135   15966 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0621 17:42:23.042152   15966 kubeadm.go:309] 
	I0621 17:42:23.042246   15966 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0621 17:42:23.042264   15966 kubeadm.go:309] 
	I0621 17:42:23.042385   15966 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0621 17:42:23.042396   15966 kubeadm.go:309] 
	I0621 17:42:23.042443   15966 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0621 17:42:23.042516   15966 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0621 17:42:23.042560   15966 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0621 17:42:23.042568   15966 kubeadm.go:309] 
	I0621 17:42:23.042639   15966 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0621 17:42:23.042654   15966 kubeadm.go:309] 
	I0621 17:42:23.042731   15966 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0621 17:42:23.042745   15966 kubeadm.go:309] 
	I0621 17:42:23.042815   15966 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0621 17:42:23.042907   15966 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0621 17:42:23.042987   15966 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0621 17:42:23.042995   15966 kubeadm.go:309] 
	I0621 17:42:23.043074   15966 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0621 17:42:23.043167   15966 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0621 17:42:23.043179   15966 kubeadm.go:309] 
	I0621 17:42:23.043277   15966 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ovr80d.bfmxl3kesmj3l5ma \
	I0621 17:42:23.043410   15966 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df \
	I0621 17:42:23.043432   15966 kubeadm.go:309] 	--control-plane 
	I0621 17:42:23.043445   15966 kubeadm.go:309] 
	I0621 17:42:23.043564   15966 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0621 17:42:23.043573   15966 kubeadm.go:309] 
	I0621 17:42:23.043677   15966 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ovr80d.bfmxl3kesmj3l5ma \
	I0621 17:42:23.043814   15966 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df 
	I0621 17:42:23.043840   15966 cni.go:84] Creating CNI manager for ""
	I0621 17:42:23.043851   15966 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0621 17:42:23.045257   15966 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0621 17:42:23.046481   15966 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0621 17:42:23.057341   15966 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0621 17:42:23.075477   15966 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0621 17:42:23.075592   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:23.075600   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-299362 minikube.k8s.io/updated_at=2024_06_21T17_42_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600 minikube.k8s.io/name=addons-299362 minikube.k8s.io/primary=true
	I0621 17:42:23.090769   15966 ops.go:34] apiserver oom_adj: -16
	I0621 17:42:23.207909   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:23.708760   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:24.208930   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:24.708066   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:25.208584   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:25.708163   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:26.208056   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:26.708034   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:27.208146   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:27.708289   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:28.208986   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:28.708553   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:29.208089   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:29.708271   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:30.208284   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:30.708790   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:31.207961   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:31.708278   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:32.207930   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:32.708188   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:33.208002   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:33.708079   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:34.208664   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:34.708165   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:35.208882   15966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 17:42:35.300898   15966 kubeadm.go:1107] duration metric: took 12.225370332s to wait for elevateKubeSystemPrivileges
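	The ~500 ms `kubectl get sa default` probes above appear to be minikube waiting for the kube-system default service account to exist, so that the `minikube-rbac` cluster-admin binding issued earlier in the log applies to a real account. A minimal, hypothetical Go sketch of such a wait-then-bind step (the kubectl path, kubeconfig path, and commands are copied from the log lines above; the loop structure and interval are assumptions, not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Paths as recorded in the log above.
		kubectl := "/var/lib/minikube/binaries/v1.30.2/kubectl"
		kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

		// Poll until the default service account exists; kube-controller-manager
		// creates it asynchronously after the API server becomes ready.
		for {
			if err := exec.Command(kubectl, "get", "sa", "default", kubeconfig).Run(); err == nil {
				break
			}
			time.Sleep(500 * time.Millisecond) // assumed interval, matching the log's timestamp cadence
		}

		// Grant cluster-admin to kube-system:default, mirroring the
		// "create clusterrolebinding minikube-rbac" call recorded above.
		out, err := exec.Command(kubectl, "create", "clusterrolebinding", "minikube-rbac",
			"--clusterrole=cluster-admin", "--serviceaccount=kube-system:default",
			kubeconfig).CombinedOutput()
		fmt.Println(string(out), err)
	}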
	W0621 17:42:35.300945   15966 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0621 17:42:35.300956   15966 kubeadm.go:393] duration metric: took 22.754815293s to StartCluster
	I0621 17:42:35.300979   15966 settings.go:142] acquiring lock: {Name:mkdbb660cad4d8fb446e5c2ca4439ea3326e9592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 17:42:35.301110   15966 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 17:42:35.301657   15966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/kubeconfig: {Name:mk87038194ab41f67dd50d90b017d32a83c3da4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 17:42:35.301904   15966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0621 17:42:35.301955   15966 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 17:42:35.302018   15966 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0621 17:42:35.302113   15966 addons.go:69] Setting yakd=true in profile "addons-299362"
	I0621 17:42:35.302133   15966 addons.go:69] Setting inspektor-gadget=true in profile "addons-299362"
	I0621 17:42:35.302157   15966 addons.go:69] Setting volcano=true in profile "addons-299362"
	I0621 17:42:35.302159   15966 addons.go:69] Setting storage-provisioner=true in profile "addons-299362"
	I0621 17:42:35.302178   15966 addons.go:234] Setting addon volcano=true in "addons-299362"
	I0621 17:42:35.302176   15966 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-299362"
	I0621 17:42:35.302216   15966 addons.go:69] Setting volumesnapshots=true in profile "addons-299362"
	I0621 17:42:35.302222   15966 addons.go:69] Setting registry=true in profile "addons-299362"
	I0621 17:42:35.302235   15966 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-299362"
	I0621 17:42:35.302238   15966 addons.go:234] Setting addon volumesnapshots=true in "addons-299362"
	I0621 17:42:35.302249   15966 addons.go:234] Setting addon registry=true in "addons-299362"
	I0621 17:42:35.302270   15966 host.go:66] Checking if "addons-299362" exists ...
	I0621 17:42:35.302270   15966 host.go:66] Checking if "addons-299362" exists ...
	I0621 17:42:35.302186   15966 addons.go:69] Setting cloud-spanner=true in profile "addons-299362"
	I0621 17:42:35.302289   15966 host.go:66] Checking if "addons-299362" exists ...
	I0621 17:42:35.302309   15966 addons.go:234] Setting addon cloud-spanner=true in "addons-299362"
	I0621 17:42:35.302219   15966 host.go:66] Checking if "addons-299362" exists ...
	I0621 17:42:35.302340   15966 host.go:66] Checking if "addons-299362" exists ...
	I0621 17:42:35.302179   15966 addons.go:234] Setting addon inspektor-gadget=true in "addons-299362"
	I0621 17:42:35.302421   15966 host.go:66] Checking if "addons-299362" exists ...
	I0621 17:42:35.302731   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.302740   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.302732   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.302758   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.302773   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.302779   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.302782   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.302766   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.302146   15966 addons.go:234] Setting addon yakd=true in "addons-299362"
	I0621 17:42:35.302797   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.302810   15966 host.go:66] Checking if "addons-299362" exists ...
	I0621 17:42:35.302196   15966 addons.go:69] Setting default-storageclass=true in profile "addons-299362"
	I0621 17:42:35.302192   15966 config.go:182] Loaded profile config "addons-299362": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 17:42:35.302199   15966 addons.go:69] Setting gcp-auth=true in profile "addons-299362"
	I0621 17:42:35.302862   15966 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-299362"
	I0621 17:42:35.302865   15966 mustload.go:65] Loading cluster: addons-299362
	I0621 17:42:35.302894   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.302955   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.303003   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.303012   15966 config.go:182] Loaded profile config "addons-299362": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 17:42:35.302197   15966 addons.go:234] Setting addon storage-provisioner=true in "addons-299362"
	I0621 17:42:35.302204   15966 addons.go:69] Setting metrics-server=true in profile "addons-299362"
	I0621 17:42:35.303181   15966 addons.go:234] Setting addon metrics-server=true in "addons-299362"
	I0621 17:42:35.302204   15966 addons.go:69] Setting ingress-dns=true in profile "addons-299362"
	I0621 17:42:35.303209   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.303222   15966 addons.go:234] Setting addon ingress-dns=true in "addons-299362"
	I0621 17:42:35.303235   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.303252   15966 host.go:66] Checking if "addons-299362" exists ...
	I0621 17:42:35.302202   15966 addons.go:69] Setting ingress=true in profile "addons-299362"
	I0621 17:42:35.303322   15966 addons.go:234] Setting addon ingress=true in "addons-299362"
	I0621 17:42:35.303347   15966 host.go:66] Checking if "addons-299362" exists ...
	I0621 17:42:35.302208   15966 addons.go:69] Setting helm-tiller=true in profile "addons-299362"
	I0621 17:42:35.303398   15966 addons.go:234] Setting addon helm-tiller=true in "addons-299362"
	I0621 17:42:35.303430   15966 host.go:66] Checking if "addons-299362" exists ...
	I0621 17:42:35.303489   15966 host.go:66] Checking if "addons-299362" exists ...
	I0621 17:42:35.303566   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.303587   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.302151   15966 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-299362"
	I0621 17:42:35.303642   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.303650   15966 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-299362"
	I0621 17:42:35.303193   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.303656   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.303666   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.302191   15966 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-299362"
	I0621 17:42:35.303795   15966 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-299362"
	I0621 17:42:35.303839   15966 host.go:66] Checking if "addons-299362" exists ...
	I0621 17:42:35.303799   15966 host.go:66] Checking if "addons-299362" exists ...
	I0621 17:42:35.304124   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.304129   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.304143   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.304159   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.304192   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.304214   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.304234   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.304348   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.304234   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.304436   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.304401   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.305916   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.304328   15966 out.go:177] * Verifying Kubernetes components...
	I0621 17:42:35.315909   15966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 17:42:35.329867   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33809
	I0621 17:42:35.329960   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43769
	I0621 17:42:35.330065   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44333
	I0621 17:42:35.330113   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I0621 17:42:35.330164   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42115
	I0621 17:42:35.330501   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.330517   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.330847   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.330919   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.331019   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.331172   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.331182   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.331245   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.331268   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.331305   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.331315   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.331453   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.331469   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.331965   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.331981   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.332043   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.332084   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.332123   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.332161   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.332581   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.332611   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.333264   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.340237   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44497
	I0621 17:42:35.345949   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38661
	I0621 17:42:35.346290   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.346311   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.346320   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.346346   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.346677   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.346714   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.348353   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.348379   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.348471   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39541
	I0621 17:42:35.348641   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.348920   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.349133   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.349153   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.349395   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.349417   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.349500   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.350041   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.350069   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.351504   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.351591   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.351823   15966 main.go:141] libmachine: (addons-299362) Calling .GetState
	I0621 17:42:35.352486   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.352505   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.353233   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.353407   15966 main.go:141] libmachine: (addons-299362) Calling .GetState
	I0621 17:42:35.356702   15966 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-299362"
	I0621 17:42:35.356741   15966 host.go:66] Checking if "addons-299362" exists ...
	I0621 17:42:35.357105   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.357134   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.358418   15966 addons.go:234] Setting addon default-storageclass=true in "addons-299362"
	I0621 17:42:35.358455   15966 host.go:66] Checking if "addons-299362" exists ...
	I0621 17:42:35.358699   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.358720   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.381789   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33647
	I0621 17:42:35.382379   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.382881   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.382901   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.383301   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.383637   15966 main.go:141] libmachine: (addons-299362) Calling .GetState
	I0621 17:42:35.385117   15966 host.go:66] Checking if "addons-299362" exists ...
	I0621 17:42:35.385452   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.385495   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.388686   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45329
	I0621 17:42:35.389248   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.389874   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.389914   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.390289   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.390442   15966 main.go:141] libmachine: (addons-299362) Calling .GetState
	I0621 17:42:35.391396   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46139
	I0621 17:42:35.391602   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33649
	I0621 17:42:35.392029   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.392124   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.392523   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.392540   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.392655   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.392672   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.392896   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.393237   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.393419   15966 main.go:141] libmachine: (addons-299362) Calling .GetState
	I0621 17:42:35.393462   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.393505   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.395146   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:42:35.395762   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41409
	I0621 17:42:35.395896   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:42:35.396433   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.396986   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.397009   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.397333   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.397712   15966 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0621 17:42:35.397713   15966 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0621 17:42:35.397875   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.397915   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.398196   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44285
	I0621 17:42:35.398665   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.399131   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.399139   15966 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0621 17:42:35.399150   15966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0621 17:42:35.399151   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.399165   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:35.399169   15966 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0621 17:42:35.399182   15966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0621 17:42:35.399199   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:35.399465   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.399631   15966 main.go:141] libmachine: (addons-299362) Calling .GetState
	I0621 17:42:35.401544   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:42:35.403164   15966 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0621 17:42:35.403308   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45339
	I0621 17:42:35.403559   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36131
	I0621 17:42:35.403873   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.404296   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.404465   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.404476   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.404565   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.404838   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.404894   15966 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0621 17:42:35.404911   15966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0621 17:42:35.404936   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:35.405014   15966 main.go:141] libmachine: (addons-299362) Calling .GetState
	I0621 17:42:35.405463   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.405779   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.405793   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.405984   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:35.406010   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.406083   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:35.406299   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.406323   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:35.406350   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.406483   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:35.406916   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.406958   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.407160   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:35.407183   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:35.407362   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:35.407519   15966 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa Username:docker}
	I0621 17:42:35.407544   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:35.407685   15966 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa Username:docker}
	I0621 17:42:35.407862   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:42:35.409646   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.409819   15966 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0621 17:42:35.410191   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:35.410210   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.410502   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:35.410523   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35221
	I0621 17:42:35.410543   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46823
	I0621 17:42:35.410800   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:35.410883   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.410958   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.411011   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:35.411158   15966 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa Username:docker}
	I0621 17:42:35.411191   15966 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0621 17:42:35.411208   15966 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0621 17:42:35.411226   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:35.411389   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.411403   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.411516   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.411534   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.411688   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.412185   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.412216   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.412357   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.412573   15966 main.go:141] libmachine: (addons-299362) Calling .GetState
	I0621 17:42:35.415179   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:42:35.415782   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:35.415799   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:35.415783   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.417453   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38409
	I0621 17:42:35.417556   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:35.417571   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:35.417582   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:35.417592   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:35.417607   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:35.417870   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:35.417879   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:35.417892   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:35.417949   15966 main.go:141] libmachine: () Calling .GetVersion
	W0621 17:42:35.417974   15966 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0621 17:42:35.418417   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.418433   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.418502   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:35.418514   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.418683   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:35.418854   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:35.419012   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:35.419162   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.419156   15966 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa Username:docker}
	I0621 17:42:35.419242   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38303
	I0621 17:42:35.419765   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.419843   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.419885   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.423483   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38839
	I0621 17:42:35.424014   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.424403   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39005
	I0621 17:42:35.424484   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.424498   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.424766   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.425098   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.425178   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.425192   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.425522   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.425607   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.425644   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.426033   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.426058   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.426501   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.426518   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.427032   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.427627   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.427651   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.428342   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46061
	I0621 17:42:35.428754   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.429309   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.429325   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.429667   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.429916   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:42:35.437598   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45115
	I0621 17:42:35.438131   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.438579   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.438594   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.438897   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.439067   15966 main.go:141] libmachine: (addons-299362) Calling .GetState
	I0621 17:42:35.440783   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40503
	I0621 17:42:35.441181   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:42:35.441960   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.442492   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.442508   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.443153   15966 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0621 17:42:35.443582   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.443637   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44703
	I0621 17:42:35.443905   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.444056   15966 main.go:141] libmachine: (addons-299362) Calling .GetState
	I0621 17:42:35.444292   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.444305   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.444582   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.444742   15966 main.go:141] libmachine: (addons-299362) Calling .GetState
	I0621 17:42:35.445828   15966 out.go:177]   - Using image docker.io/busybox:stable
	I0621 17:42:35.446388   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:42:35.446913   15966 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0621 17:42:35.446931   15966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0621 17:42:35.446953   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:35.446967   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:42:35.447184   15966 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0621 17:42:35.447197   15966 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0621 17:42:35.447213   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:35.447914   15966 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0621 17:42:35.449152   15966 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0621 17:42:35.449170   15966 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0621 17:42:35.449190   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:35.451656   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.452157   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:35.452177   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.452315   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:35.452447   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:35.453114   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.453160   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:35.453168   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.453490   15966 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa Username:docker}
	I0621 17:42:35.453706   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:35.453714   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:35.453725   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.453742   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.454074   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:35.454142   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:35.454296   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:35.454345   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:35.454469   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:35.454520   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:35.454619   15966 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa Username:docker}
	I0621 17:42:35.454623   15966 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa Username:docker}
	I0621 17:42:35.456485   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44523
	I0621 17:42:35.456900   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.457078   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34367
	I0621 17:42:35.457629   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.457650   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.457675   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.458012   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.458172   15966 main.go:141] libmachine: (addons-299362) Calling .GetState
	I0621 17:42:35.458823   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.458849   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.460178   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.460211   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43525
	I0621 17:42:35.460181   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:42:35.460427   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46387
	I0621 17:42:35.460602   15966 main.go:141] libmachine: (addons-299362) Calling .GetState
	I0621 17:42:35.460863   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.461070   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.461561   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.461584   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.461901   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.461918   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.462116   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.462200   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.462308   15966 main.go:141] libmachine: (addons-299362) Calling .GetState
	I0621 17:42:35.462419   15966 main.go:141] libmachine: (addons-299362) Calling .GetState
	I0621 17:42:35.462492   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:42:35.462634   15966 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0621 17:42:35.463536   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44181
	I0621 17:42:35.463802   15966 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0621 17:42:35.463876   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.464301   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:42:35.464500   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.464515   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.464555   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:42:35.464797   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.464856   15966 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0621 17:42:35.464954   15966 main.go:141] libmachine: (addons-299362) Calling .GetState
	I0621 17:42:35.464982   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36797
	I0621 17:42:35.465009   15966 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 17:42:35.465586   15966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0621 17:42:35.465604   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:35.465895   15966 out.go:177]   - Using image docker.io/registry:2.8.3
	I0621 17:42:35.466017   15966 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0621 17:42:35.466072   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.466910   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.467143   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.467292   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:42:35.467647   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.468027   15966 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0621 17:42:35.468286   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:35.468576   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:35.468830   15966 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0621 17:42:35.468846   15966 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0621 17:42:35.468854   15966 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0621 17:42:35.469385   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.469725   15966 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0621 17:42:35.469742   15966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0621 17:42:35.469758   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:35.469827   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:35.469857   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.470637   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:35.470777   15966 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0621 17:42:35.470785   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:35.470792   15966 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0621 17:42:35.470811   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:35.470892   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:35.470961   15966 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa Username:docker}
	I0621 17:42:35.471658   15966 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0621 17:42:35.471672   15966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0621 17:42:35.471896   15966 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0621 17:42:35.472017   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:35.473873   15966 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0621 17:42:35.474553   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.474936   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.474969   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:35.474980   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.475264   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:35.475470   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:35.475566   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:35.475580   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.475856   15966 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0621 17:42:35.476052   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:35.475874   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:35.476223   15966 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa Username:docker}
	I0621 17:42:35.476478   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:35.476519   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.476684   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:35.476901   15966 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa Username:docker}
	I0621 17:42:35.477118   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:35.477134   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.477158   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:35.477309   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:35.477454   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:35.477531   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32917
	I0621 17:42:35.477581   15966 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa Username:docker}
	I0621 17:42:35.478117   15966 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0621 17:42:35.478304   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.478747   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.478764   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.479031   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.479216   15966 main.go:141] libmachine: (addons-299362) Calling .GetState
	I0621 17:42:35.480556   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:42:35.480673   15966 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0621 17:42:35.481697   15966 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.29.0
	I0621 17:42:35.481722   15966 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0621 17:42:35.483059   15966 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0621 17:42:35.483077   15966 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0621 17:42:35.483095   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:35.483095   15966 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0621 17:42:35.483110   15966 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0621 17:42:35.483138   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:35.486547   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.486915   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.486943   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:35.486956   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.487085   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:35.487234   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:35.487498   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:35.487519   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:35.487536   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.487610   15966 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa Username:docker}
	I0621 17:42:35.487742   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:35.487852   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:35.487943   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:35.488056   15966 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa Username:docker}
	I0621 17:42:35.489931   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46335
	I0621 17:42:35.510219   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:35.510826   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:35.510855   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:35.511182   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:35.511380   15966 main.go:141] libmachine: (addons-299362) Calling .GetState
	I0621 17:42:35.512868   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:42:35.514677   15966 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0621 17:42:35.516017   15966 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0621 17:42:35.516033   15966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0621 17:42:35.516051   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:35.518841   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.519196   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:35.519222   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:35.519364   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:35.519503   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:35.519644   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:35.519739   15966 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa Username:docker}
	I0621 17:42:35.777950   15966 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0621 17:42:35.777982   15966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0621 17:42:35.841686   15966 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0621 17:42:35.841716   15966 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0621 17:42:35.843168   15966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0621 17:42:35.851434   15966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0621 17:42:35.861466   15966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0621 17:42:35.861521   15966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0621 17:42:35.864025   15966 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0621 17:42:35.864045   15966 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0621 17:42:35.893667   15966 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0621 17:42:35.893692   15966 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0621 17:42:35.893934   15966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0621 17:42:36.001987   15966 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0621 17:42:36.002018   15966 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0621 17:42:36.008175   15966 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0621 17:42:36.008202   15966 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0621 17:42:36.029017   15966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0621 17:42:36.030664   15966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0621 17:42:36.038797   15966 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0621 17:42:36.038814   15966 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0621 17:42:36.052456   15966 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0621 17:42:36.052473   15966 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0621 17:42:36.059269   15966 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0621 17:42:36.059285   15966 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0621 17:42:36.100976   15966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0621 17:42:36.107674   15966 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0621 17:42:36.107695   15966 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0621 17:42:36.146869   15966 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0621 17:42:36.146894   15966 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0621 17:42:36.164438   15966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 17:42:36.187897   15966 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0621 17:42:36.187917   15966 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0621 17:42:36.193547   15966 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0621 17:42:36.193569   15966 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0621 17:42:36.258007   15966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0621 17:42:36.271592   15966 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0621 17:42:36.271614   15966 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0621 17:42:36.338176   15966 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0621 17:42:36.338200   15966 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0621 17:42:36.347921   15966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0621 17:42:36.349228   15966 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0621 17:42:36.349245   15966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0621 17:42:36.375607   15966 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0621 17:42:36.375641   15966 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0621 17:42:36.431021   15966 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0621 17:42:36.431039   15966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0621 17:42:36.468768   15966 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0621 17:42:36.468795   15966 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0621 17:42:36.552898   15966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0621 17:42:36.636625   15966 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0621 17:42:36.636650   15966 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0621 17:42:36.647164   15966 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0621 17:42:36.647184   15966 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0621 17:42:36.660558   15966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0621 17:42:36.752035   15966 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0621 17:42:36.752065   15966 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0621 17:42:36.860642   15966 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0621 17:42:36.860670   15966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0621 17:42:36.946586   15966 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0621 17:42:36.946620   15966 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0621 17:42:36.950876   15966 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0621 17:42:36.950907   15966 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0621 17:42:37.261381   15966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0621 17:42:37.276582   15966 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0621 17:42:37.276607   15966 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0621 17:42:37.288084   15966 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0621 17:42:37.288114   15966 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0621 17:42:37.483889   15966 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0621 17:42:37.483908   15966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0621 17:42:37.512134   15966 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0621 17:42:37.512155   15966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0621 17:42:37.843246   15966 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0621 17:42:37.843271   15966 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0621 17:42:37.846023   15966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0621 17:42:38.117016   15966 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0621 17:42:38.117042   15966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0621 17:42:38.343086   15966 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0621 17:42:38.343108   15966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0621 17:42:38.687542   15966 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0621 17:42:38.687566   15966 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0621 17:42:38.956464   15966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0621 17:42:40.655005   15966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.811804713s)
	I0621 17:42:40.655028   15966 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.793484885s)
	I0621 17:42:40.655046   15966 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
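	(The sed pipeline that just completed rewrites the coredns ConfigMap so the host VM's gateway address resolves inside the cluster as host.minikube.internal. Reconstructed from the sed expression in that command, using this run's gateway IP 192.168.39.1, the Corefile ends up with a stanza along these lines:

		hosts {
		   192.168.39.1 host.minikube.internal
		   fallthrough
		}
	)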
	I0621 17:42:40.655050   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:40.655064   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:40.655005   15966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.803534138s)
	I0621 17:42:40.655089   15966 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.793599906s)
	I0621 17:42:40.655122   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:40.655134   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:40.655142   15966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.761041879s)
	I0621 17:42:40.655166   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:40.655177   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:40.655221   15966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.626179398s)
	I0621 17:42:40.655238   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:40.655246   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:40.655600   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:40.655620   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:40.655628   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:40.655637   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:40.655846   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:40.655874   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:40.655881   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:40.655888   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:40.655895   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:40.655952   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:40.655986   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:40.655981   15966 node_ready.go:35] waiting up to 6m0s for node "addons-299362" to be "Ready" ...
	I0621 17:42:40.655992   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:40.656001   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:40.656008   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:40.656070   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:40.656100   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:40.656107   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:40.656115   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:40.656123   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:40.656152   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:40.656176   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:40.656182   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:40.656309   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:40.656332   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:40.656340   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:40.657250   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:40.657295   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:40.657305   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:40.657316   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:40.657343   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:40.657350   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:40.746054   15966 node_ready.go:49] node "addons-299362" has status "Ready":"True"
	I0621 17:42:40.746082   15966 node_ready.go:38] duration metric: took 90.08398ms for node "addons-299362" to be "Ready" ...
	I0621 17:42:40.746097   15966 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0621 17:42:40.839186   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:40.839216   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:40.839489   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:40.839511   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:40.843311   15966 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cnln6" in "kube-system" namespace to be "Ready" ...
	I0621 17:42:41.166377   15966 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-299362" context rescaled to 1 replicas
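	(The rescale logged above trims the default two-replica coredns Deployment down to one, which is sufficient for a single-node cluster. A hypothetical manual equivalent, for illustration only, would be:

		kubectl -n kube-system scale deployment coredns --replicas=1
	)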
	I0621 17:42:41.860281   15966 pod_ready.go:92] pod "coredns-7db6d8ff4d-cnln6" in "kube-system" namespace has status "Ready":"True"
	I0621 17:42:41.860308   15966 pod_ready.go:81] duration metric: took 1.016974069s for pod "coredns-7db6d8ff4d-cnln6" in "kube-system" namespace to be "Ready" ...
	I0621 17:42:41.860319   15966 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dh4df" in "kube-system" namespace to be "Ready" ...
	I0621 17:42:41.874158   15966 pod_ready.go:92] pod "coredns-7db6d8ff4d-dh4df" in "kube-system" namespace has status "Ready":"True"
	I0621 17:42:41.874179   15966 pod_ready.go:81] duration metric: took 13.853929ms for pod "coredns-7db6d8ff4d-dh4df" in "kube-system" namespace to be "Ready" ...
	I0621 17:42:41.874189   15966 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-299362" in "kube-system" namespace to be "Ready" ...
	I0621 17:42:41.882733   15966 pod_ready.go:92] pod "etcd-addons-299362" in "kube-system" namespace has status "Ready":"True"
	I0621 17:42:41.882758   15966 pod_ready.go:81] duration metric: took 8.562278ms for pod "etcd-addons-299362" in "kube-system" namespace to be "Ready" ...
	I0621 17:42:41.882770   15966 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-299362" in "kube-system" namespace to be "Ready" ...
	I0621 17:42:41.892913   15966 pod_ready.go:92] pod "kube-apiserver-addons-299362" in "kube-system" namespace has status "Ready":"True"
	I0621 17:42:41.892939   15966 pod_ready.go:81] duration metric: took 10.16035ms for pod "kube-apiserver-addons-299362" in "kube-system" namespace to be "Ready" ...
	I0621 17:42:41.892951   15966 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-299362" in "kube-system" namespace to be "Ready" ...
	I0621 17:42:41.902027   15966 pod_ready.go:92] pod "kube-controller-manager-addons-299362" in "kube-system" namespace has status "Ready":"True"
	I0621 17:42:41.902051   15966 pod_ready.go:81] duration metric: took 9.088563ms for pod "kube-controller-manager-addons-299362" in "kube-system" namespace to be "Ready" ...
	I0621 17:42:41.902063   15966 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gml64" in "kube-system" namespace to be "Ready" ...
	I0621 17:42:42.332207   15966 pod_ready.go:92] pod "kube-proxy-gml64" in "kube-system" namespace has status "Ready":"True"
	I0621 17:42:42.332240   15966 pod_ready.go:81] duration metric: took 430.16837ms for pod "kube-proxy-gml64" in "kube-system" namespace to be "Ready" ...
	I0621 17:42:42.332256   15966 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-299362" in "kube-system" namespace to be "Ready" ...
	I0621 17:42:42.534393   15966 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0621 17:42:42.534425   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:42.537448   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:42.537910   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:42.537937   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:42.538187   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:42.538412   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:42.538595   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:42.538760   15966 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa Username:docker}
	I0621 17:42:42.683473   15966 pod_ready.go:92] pod "kube-scheduler-addons-299362" in "kube-system" namespace has status "Ready":"True"
	I0621 17:42:42.683502   15966 pod_ready.go:81] duration metric: took 351.236037ms for pod "kube-scheduler-addons-299362" in "kube-system" namespace to be "Ready" ...
	I0621 17:42:42.683517   15966 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace to be "Ready" ...
	I0621 17:42:43.017016   15966 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0621 17:42:43.209318   15966 addons.go:234] Setting addon gcp-auth=true in "addons-299362"
	I0621 17:42:43.209383   15966 host.go:66] Checking if "addons-299362" exists ...
	I0621 17:42:43.209856   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:43.209893   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:43.225218   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42801
	I0621 17:42:43.225733   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:43.226271   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:43.226293   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:43.226708   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:43.227230   15966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 17:42:43.227262   15966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 17:42:43.242404   15966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41001
	I0621 17:42:43.242938   15966 main.go:141] libmachine: () Calling .GetVersion
	I0621 17:42:43.243432   15966 main.go:141] libmachine: Using API Version  1
	I0621 17:42:43.243448   15966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 17:42:43.243828   15966 main.go:141] libmachine: () Calling .GetMachineName
	I0621 17:42:43.244019   15966 main.go:141] libmachine: (addons-299362) Calling .GetState
	I0621 17:42:43.245392   15966 main.go:141] libmachine: (addons-299362) Calling .DriverName
	I0621 17:42:43.245643   15966 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0621 17:42:43.245680   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHHostname
	I0621 17:42:43.248264   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:43.248611   15966 main.go:141] libmachine: (addons-299362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:bb:14", ip: ""} in network mk-addons-299362: {Iface:virbr1 ExpiryTime:2024-06-21 18:41:56 +0000 UTC Type:0 Mac:52:54:00:4a:bb:14 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-299362 Clientid:01:52:54:00:4a:bb:14}
	I0621 17:42:43.248643   15966 main.go:141] libmachine: (addons-299362) DBG | domain addons-299362 has defined IP address 192.168.39.187 and MAC address 52:54:00:4a:bb:14 in network mk-addons-299362
	I0621 17:42:43.248801   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHPort
	I0621 17:42:43.248969   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHKeyPath
	I0621 17:42:43.249108   15966 main.go:141] libmachine: (addons-299362) Calling .GetSSHUsername
	I0621 17:42:43.249236   15966 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/addons-299362/id_rsa Username:docker}
	I0621 17:42:43.465573   15966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.434875903s)
	I0621 17:42:43.465624   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:43.465584   15966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.364569769s)
	I0621 17:42:43.465666   15966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.301197025s)
	I0621 17:42:43.465672   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:43.465705   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:43.465711   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:43.465717   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:43.465633   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:43.465761   15966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.207722411s)
	I0621 17:42:43.465813   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:43.465815   15966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.117872414s)
	I0621 17:42:43.465822   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:43.465872   15966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.912949956s)
	I0621 17:42:43.465887   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:43.465897   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:43.465836   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:43.465940   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:43.466079   15966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.805487844s)
	I0621 17:42:43.466108   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:43.466119   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:43.466251   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:43.466276   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:43.466291   15966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.204873647s)
	I0621 17:42:43.466308   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	W0621 17:42:43.466314   15966 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0621 17:42:43.466323   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:43.466338   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:43.466335   15966 retry.go:31] will retry after 334.406266ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
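	(The failure retried above is an ordering problem rather than a broken manifest: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define it, and the snapshot.storage.k8s.io types are not yet established when the custom resource arrives, so the resource-mapping lookup fails. minikube recovers by retrying and then re-applying with --force, as logged at 17:42:43.801897 below. A rough manual sketch of the same recovery, assuming kubectl access to this cluster, would wait for the CRD to be established before applying the class:

		kubectl wait --for=condition=Established \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	)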
	I0621 17:42:43.466356   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:43.466363   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:43.466373   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:43.466377   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:43.466381   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:43.466387   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:43.466395   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:43.466394   15966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.620334971s)
	I0621 17:42:43.466403   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:43.466417   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:43.466427   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:43.466427   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:43.466435   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:43.466443   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:43.466450   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:43.466476   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:43.466486   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:43.466493   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:43.466499   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:43.466506   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:43.466514   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:43.466522   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:43.466543   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:43.466500   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:43.466560   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:43.466613   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:43.466620   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:43.466628   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:43.466635   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:43.466687   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:43.466707   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:43.466713   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:43.467014   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:43.467106   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:43.467124   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:43.467894   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:43.467929   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:43.467937   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:43.467983   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:43.468008   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:43.468037   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:43.468044   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:43.468052   15966 addons.go:475] Verifying addon registry=true in "addons-299362"
	I0621 17:42:43.468181   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:43.468203   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:43.468210   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:43.468217   15966 addons.go:475] Verifying addon ingress=true in "addons-299362"
	I0621 17:42:43.468132   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:43.469737   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:43.469748   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:43.469755   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:43.468149   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:43.468166   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:43.469825   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:43.470281   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:43.470316   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:43.470342   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:43.470358   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:43.470034   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:43.470478   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:43.470051   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:43.470649   15966 out.go:177] * Verifying registry addon...
	I0621 17:42:43.470730   15966 out.go:177] * Verifying ingress addon...
	I0621 17:42:43.470800   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:43.470805   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:43.471234   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:43.471245   15966 addons.go:475] Verifying addon metrics-server=true in "addons-299362"
	I0621 17:42:43.471509   15966 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-299362 service yakd-dashboard -n yakd-dashboard
	
	I0621 17:42:43.472482   15966 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0621 17:42:43.473271   15966 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0621 17:42:43.506033   15966 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0621 17:42:43.506053   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:43.506133   15966 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0621 17:42:43.506155   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:43.547750   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:43.547769   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:43.548160   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:43.548177   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:43.801897   15966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0621 17:42:43.977586   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:43.978064   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:44.493733   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:44.493780   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:44.715169   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:42:45.006706   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:45.036508   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:45.163329   15966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.206802363s)
	I0621 17:42:45.163379   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:45.163388   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:45.163342   15966 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.917674402s)
	I0621 17:42:45.163624   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:45.163671   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:45.163695   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:45.163707   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:45.163950   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:45.163965   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:45.163975   15966 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-299362"
	I0621 17:42:45.165241   15966 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0621 17:42:45.165252   15966 out.go:177] * Verifying csi-hostpath-driver addon...
	I0621 17:42:45.167194   15966 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0621 17:42:45.168134   15966 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0621 17:42:45.168457   15966 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0621 17:42:45.168479   15966 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0621 17:42:45.195545   15966 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0621 17:42:45.195567   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:45.232464   15966 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0621 17:42:45.232489   15966 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0621 17:42:45.367650   15966 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0621 17:42:45.367683   15966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0621 17:42:45.420787   15966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0621 17:42:45.477546   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:45.495114   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:45.679352   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:45.837163   15966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.035204096s)
	I0621 17:42:45.837231   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:45.837248   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:45.837599   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:45.837622   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:45.837634   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:45.837642   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:45.837650   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:45.837932   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:45.837968   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:45.837977   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:45.978334   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:45.979254   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:46.174010   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:46.484004   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:46.492302   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:46.702071   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:46.731153   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:42:46.784765   15966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.363926475s)
	I0621 17:42:46.784810   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:46.784822   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:46.785085   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:46.785136   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:46.785140   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:46.785172   15966 main.go:141] libmachine: Making call to close driver server
	I0621 17:42:46.785183   15966 main.go:141] libmachine: (addons-299362) Calling .Close
	I0621 17:42:46.785399   15966 main.go:141] libmachine: Successfully made call to close driver server
	I0621 17:42:46.785415   15966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 17:42:46.785434   15966 main.go:141] libmachine: (addons-299362) DBG | Closing plugin on server side
	I0621 17:42:46.787076   15966 addons.go:475] Verifying addon gcp-auth=true in "addons-299362"
	I0621 17:42:46.788660   15966 out.go:177] * Verifying gcp-auth addon...
	I0621 17:42:46.790965   15966 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0621 17:42:46.796730   15966 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0621 17:42:46.796745   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:46.980763   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:46.994084   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:47.175556   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:47.294873   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:47.476883   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:47.477989   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:47.673964   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:47.795356   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:47.986089   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:47.987021   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:48.174250   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:48.295664   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:48.479086   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:48.479527   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:48.675498   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:48.795049   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:48.980473   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:48.980721   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:49.179674   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:49.193205   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:42:49.294943   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:49.481711   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:49.481721   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:49.673290   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:49.794919   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:49.977996   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:49.989318   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:50.173584   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:50.294082   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:50.478570   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:50.479682   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:50.674178   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:50.794187   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:50.977391   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:50.978133   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:51.173448   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:51.297608   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:51.478097   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:51.479599   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:51.673173   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:51.689301   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:42:51.794600   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:51.981884   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:51.982029   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:52.173676   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:52.294759   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:52.477031   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:52.477186   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:52.673756   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:52.795432   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:52.977982   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:52.978546   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:53.174055   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:53.295304   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:53.478397   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:53.478720   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:53.674242   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:53.689925   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:42:53.794999   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:53.978126   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:53.978200   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:54.174105   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:54.294563   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:54.477320   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:54.477853   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:54.674595   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:54.794821   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:54.978808   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:54.983119   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:55.173353   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:55.294314   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:55.479693   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:55.480517   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:55.674422   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:55.794783   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:55.978078   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:55.978367   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:56.174677   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:56.189612   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:42:56.294386   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:56.478043   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:56.478490   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:56.676742   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:56.794674   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:56.979740   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:56.980858   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:57.174336   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:57.296549   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:57.478558   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:57.478759   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:57.673513   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:57.794482   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:57.978165   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:57.978384   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:58.174788   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:58.294788   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:58.477731   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:58.478282   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:58.674031   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:58.689645   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:42:58.794540   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:58.978673   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:58.979232   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:59.173209   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:59.295029   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:59.479361   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:42:59.483553   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:59.673681   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:42:59.794472   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:42:59.982930   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:42:59.984374   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:00.173983   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:00.295256   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:00.477956   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:00.478077   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:00.674145   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:00.794595   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:00.978232   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:00.978597   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:01.172993   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:01.189326   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:01.294322   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:01.477896   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:01.477988   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:01.674008   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:01.795437   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:01.977199   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:01.979331   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:02.173032   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:02.295800   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:02.477287   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:02.477487   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:02.673631   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:02.794880   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:02.977935   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:02.978410   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:03.173738   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:03.189875   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:03.295654   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:03.477898   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:03.479445   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:03.673570   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:03.794844   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:03.977267   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:03.980115   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:04.174607   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:04.295032   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:04.477737   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:04.478703   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:04.675618   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:04.795393   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:04.979212   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:04.982324   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:05.174736   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:05.190146   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:05.296510   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:05.479225   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:05.480905   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:05.674585   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:05.794753   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:05.976888   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:05.978648   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:06.174361   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:06.294531   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:06.478839   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:06.482267   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:06.674345   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:07.231119   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:07.231490   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:07.232221   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:07.232658   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:07.236166   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:07.295054   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:07.477038   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:07.479246   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:07.675412   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:07.794867   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:07.976698   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:07.977390   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:08.173659   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:08.295312   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:08.479131   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:08.479166   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:08.673352   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:08.794878   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:08.977004   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:08.978809   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:09.174832   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:09.295247   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:09.478022   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:09.478338   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:09.674767   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:09.688748   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:09.795031   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:09.979868   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:09.980398   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:10.174976   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:10.294746   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:10.477238   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:10.479525   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:10.673887   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:10.799572   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:10.978292   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:10.978567   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:11.174085   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:11.294507   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:11.477943   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:11.478182   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:11.675449   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:11.689935   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:11.795755   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:11.977603   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:11.979871   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:12.173759   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:12.297123   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:12.689412   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:12.689975   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:12.690852   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:12.794148   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:12.977731   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:12.978105   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:13.174057   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:13.294920   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:13.478313   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:13.478705   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:13.675490   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:13.691056   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:13.795283   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:13.978328   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:13.978670   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:14.173669   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:14.295236   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:14.478153   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:14.478252   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:14.675081   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:14.794462   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:14.980872   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:14.981768   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:15.173196   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:15.295328   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:15.480539   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:15.480697   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:15.673897   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:15.694877   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:15.795074   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:15.976738   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:15.977812   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:16.175952   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:16.296102   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:16.477670   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:16.478470   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:16.673929   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:16.794771   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:16.976925   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:16.979006   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:17.172717   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:17.294805   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:17.476723   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:17.480484   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:17.674220   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:17.795014   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:17.978849   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:17.979740   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:18.173243   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:18.189663   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:18.294775   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:18.477846   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:18.479168   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:18.680823   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:18.794237   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:18.980458   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:18.982233   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:19.174529   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:19.294839   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:19.481245   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:19.482500   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:20.075373   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:20.075796   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:20.078073   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:20.078259   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:20.173745   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:20.194179   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:20.299142   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:20.490532   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:20.491312   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:20.676458   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:20.795092   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:20.977867   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:20.978040   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:21.175081   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:21.295201   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:21.477672   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:21.478215   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:21.673431   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:21.794521   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:21.977986   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:21.978313   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:22.173915   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:22.297484   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:22.479146   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:22.481552   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:22.672815   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:22.689952   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:22.795374   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:22.976987   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:22.979782   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:23.173353   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:23.296261   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:23.477910   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:23.478000   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:23.676188   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:23.794386   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:23.977513   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:23.978664   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:24.173152   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:24.295319   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:24.478712   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:24.479714   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:24.674005   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:24.795366   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:24.979309   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:24.979669   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:25.172574   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:25.188909   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:25.295363   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:25.478426   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:25.479058   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:25.675316   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:25.793921   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:25.977994   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:25.978528   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:26.173943   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:26.294620   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:26.477704   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:26.477785   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:26.676310   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:26.795814   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:26.977860   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:26.980381   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:27.181481   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:27.295345   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:27.477365   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:27.477517   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:27.674671   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:27.689351   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:27.794987   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:27.979026   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:27.979951   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:28.174283   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:28.294821   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:28.477131   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:28.479882   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:28.673537   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:28.795337   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:28.978456   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:28.979126   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:29.173387   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:29.295319   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:29.477978   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:29.478589   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:29.673500   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:29.711031   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:29.799770   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:29.978005   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:29.978062   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:30.174188   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:30.294253   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:30.740363   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:30.740808   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:30.742354   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:30.795464   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:30.978072   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:30.978355   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:31.174309   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:31.294764   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:31.478104   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:31.478450   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:31.672850   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:31.794653   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:31.977023   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:31.977757   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:32.173740   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:32.188814   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:32.295553   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:32.478343   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:32.478581   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:32.674580   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:32.795361   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:32.977773   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:32.978278   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:33.194268   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:33.294387   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:33.479546   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:33.479743   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:33.674098   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:33.794115   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:33.978050   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:33.980589   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:34.174004   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:34.189273   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:34.294238   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:34.478532   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:34.479027   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:34.673564   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:34.794976   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:34.978356   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:34.979273   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:35.175562   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:35.295420   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:35.479570   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:35.479670   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:35.683078   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:36.066285   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:36.067285   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:36.069881   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:36.174782   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:36.295728   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:36.479467   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:36.480503   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:36.674860   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:36.691433   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:36.799854   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:36.981328   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:36.983112   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:37.173783   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:37.294027   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:37.477054   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0621 17:43:37.481292   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:37.673963   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:37.795009   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:37.979452   15966 kapi.go:107] duration metric: took 54.506967938s to wait for kubernetes.io/minikube-addons=registry ...
	I0621 17:43:37.980075   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:38.173542   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:38.294990   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:38.478275   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:38.673736   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:38.795481   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:38.978357   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:39.173869   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:39.189363   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:39.295293   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:39.478404   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:39.673962   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:39.795692   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:39.977143   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:40.173753   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:40.294291   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:40.478082   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:40.683399   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:40.794644   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:40.978826   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:41.173259   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:41.301562   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:41.478271   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:41.673781   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:41.692767   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:41.794699   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:41.985673   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:42.174635   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:42.294799   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:42.477774   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:42.673247   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:42.795914   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:42.978806   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:43.173973   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:43.294984   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:43.477904   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:43.673395   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:43.794494   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:43.977415   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:44.176355   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:44.188424   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:44.294544   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:44.477632   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:44.674798   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:44.794628   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:44.977482   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:45.174432   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:45.296436   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:45.478863   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:45.673578   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:45.795040   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:45.977706   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:46.192532   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:46.207958   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:46.298990   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:46.479693   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:46.676561   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:46.794876   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:46.981483   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:47.175696   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:47.297487   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:47.477571   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:47.674383   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:47.796230   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:47.980399   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:48.174137   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:48.294756   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:48.477284   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:48.677206   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:48.691172   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:48.796143   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:48.979294   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:49.175672   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:49.294911   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:49.479460   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:49.676324   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:49.797178   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:49.978055   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:50.173939   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:50.295538   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:50.783673   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:50.784312   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:50.788579   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:50.802037   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:50.978481   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:51.174656   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:51.294943   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:51.478362   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:51.678211   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:51.798555   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:51.977032   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:52.173906   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:52.294781   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:52.477978   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:52.674240   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:52.795131   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:52.978714   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:53.173548   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:53.189444   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:53.294210   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:53.478553   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:53.673712   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:53.795402   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:53.977968   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:54.330785   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:54.333591   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:54.477588   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:54.674481   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:54.794969   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:54.977973   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:55.175969   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:55.294833   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:55.478670   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:55.676962   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:55.691684   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:55.795117   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:55.977644   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:56.176225   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:56.294187   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:56.478294   15966 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0621 17:43:56.676667   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:56.795455   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:56.977661   15966 kapi.go:107] duration metric: took 1m13.504390194s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0621 17:43:57.173229   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:57.295520   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:57.673112   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:57.794805   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:58.173751   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:58.189339   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:43:58.294320   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:58.674160   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:58.795002   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:59.174318   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:43:59.294611   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:43:59.676102   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:44:00.239407   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:44:00.247917   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:44:00.249148   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:44:00.295878   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0621 17:44:00.674591   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:44:00.797566   15966 kapi.go:107] duration metric: took 1m14.006599614s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0621 17:44:00.799494   15966 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-299362 cluster.
	I0621 17:44:00.800830   15966 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0621 17:44:00.802396   15966 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0621 17:44:01.175969   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:44:01.674442   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:44:02.173247   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:44:02.675669   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:44:02.698380   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:44:03.174772   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:44:03.674164   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:44:04.174292   15966 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0621 17:44:04.673714   15966 kapi.go:107] duration metric: took 1m19.505578984s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0621 17:44:04.675679   15966 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner-rancher, storage-provisioner, helm-tiller, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0621 17:44:04.677311   15966 addons.go:510] duration metric: took 1m29.375305798s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner-rancher storage-provisioner helm-tiller inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0621 17:44:05.189934   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:44:07.699152   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:44:10.189427   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:44:12.190529   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:44:14.191019   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:44:16.696604   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:44:19.189886   15966 pod_ready.go:102] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"False"
	I0621 17:44:20.689689   15966 pod_ready.go:92] pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace has status "Ready":"True"
	I0621 17:44:20.689711   15966 pod_ready.go:81] duration metric: took 1m38.006186795s for pod "metrics-server-c59844bb4-7bhms" in "kube-system" namespace to be "Ready" ...
	I0621 17:44:20.689722   15966 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-mqvml" in "kube-system" namespace to be "Ready" ...
	I0621 17:44:20.693780   15966 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-mqvml" in "kube-system" namespace has status "Ready":"True"
	I0621 17:44:20.693829   15966 pod_ready.go:81] duration metric: took 4.099075ms for pod "nvidia-device-plugin-daemonset-mqvml" in "kube-system" namespace to be "Ready" ...
	I0621 17:44:20.693847   15966 pod_ready.go:38] duration metric: took 1m39.947737623s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0621 17:44:20.693864   15966 api_server.go:52] waiting for apiserver process to appear ...
	I0621 17:44:20.693893   15966 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0621 17:44:20.693943   15966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0621 17:44:20.746187   15966 cri.go:89] found id: "172afe1c8df78cb13c4d14aaa3e2c96955a434c495b931ec0025355728cc0c84"
	I0621 17:44:20.746210   15966 cri.go:89] found id: ""
	I0621 17:44:20.746217   15966 logs.go:276] 1 containers: [172afe1c8df78cb13c4d14aaa3e2c96955a434c495b931ec0025355728cc0c84]
	I0621 17:44:20.746259   15966 ssh_runner.go:195] Run: which crictl
	I0621 17:44:20.750551   15966 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0621 17:44:20.750606   15966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0621 17:44:20.795548   15966 cri.go:89] found id: "903013fcca540f8bf0261458099a781f4f5a3d4f41a57a029a65b1a79085884f"
	I0621 17:44:20.795571   15966 cri.go:89] found id: ""
	I0621 17:44:20.795579   15966 logs.go:276] 1 containers: [903013fcca540f8bf0261458099a781f4f5a3d4f41a57a029a65b1a79085884f]
	I0621 17:44:20.795622   15966 ssh_runner.go:195] Run: which crictl
	I0621 17:44:20.800048   15966 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0621 17:44:20.800105   15966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0621 17:44:20.837272   15966 cri.go:89] found id: "6c64fdd9272b3121b4e27f465daa7b64faa8d158e73909ea192346309140ef9a"
	I0621 17:44:20.837296   15966 cri.go:89] found id: ""
	I0621 17:44:20.837305   15966 logs.go:276] 1 containers: [6c64fdd9272b3121b4e27f465daa7b64faa8d158e73909ea192346309140ef9a]
	I0621 17:44:20.837349   15966 ssh_runner.go:195] Run: which crictl
	I0621 17:44:20.841825   15966 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0621 17:44:20.841900   15966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0621 17:44:20.877916   15966 cri.go:89] found id: "9a1850def884a4ee493921bc6e0cf02cd984de3b810c1f4f58373179d9cdb59b"
	I0621 17:44:20.877953   15966 cri.go:89] found id: ""
	I0621 17:44:20.877960   15966 logs.go:276] 1 containers: [9a1850def884a4ee493921bc6e0cf02cd984de3b810c1f4f58373179d9cdb59b]
	I0621 17:44:20.878002   15966 ssh_runner.go:195] Run: which crictl
	I0621 17:44:20.881813   15966 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0621 17:44:20.881897   15966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0621 17:44:20.924319   15966 cri.go:89] found id: "425ded3f5fbd127782447ba12f42c1d2d75dfc2a65d10c76feceeb8a17534377"
	I0621 17:44:20.924338   15966 cri.go:89] found id: ""
	I0621 17:44:20.924344   15966 logs.go:276] 1 containers: [425ded3f5fbd127782447ba12f42c1d2d75dfc2a65d10c76feceeb8a17534377]
	I0621 17:44:20.924387   15966 ssh_runner.go:195] Run: which crictl
	I0621 17:44:20.928248   15966 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0621 17:44:20.928329   15966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0621 17:44:20.964969   15966 cri.go:89] found id: "39db13f3025dac0d47c10c12ef2c49faae53c5faf8f19f2c7934b1e8345a7b16"
	I0621 17:44:20.964993   15966 cri.go:89] found id: ""
	I0621 17:44:20.965000   15966 logs.go:276] 1 containers: [39db13f3025dac0d47c10c12ef2c49faae53c5faf8f19f2c7934b1e8345a7b16]
	I0621 17:44:20.965046   15966 ssh_runner.go:195] Run: which crictl
	I0621 17:44:20.969732   15966 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0621 17:44:20.969786   15966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0621 17:44:21.020696   15966 cri.go:89] found id: ""
	I0621 17:44:21.020719   15966 logs.go:276] 0 containers: []
	W0621 17:44:21.020728   15966 logs.go:278] No container was found matching "kindnet"
	I0621 17:44:21.020738   15966 logs.go:123] Gathering logs for container status ...
	I0621 17:44:21.020753   15966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0621 17:44:21.066899   15966 logs.go:123] Gathering logs for describe nodes ...
	I0621 17:44:21.066932   15966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0621 17:44:21.204819   15966 logs.go:123] Gathering logs for etcd [903013fcca540f8bf0261458099a781f4f5a3d4f41a57a029a65b1a79085884f] ...
	I0621 17:44:21.204848   15966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 903013fcca540f8bf0261458099a781f4f5a3d4f41a57a029a65b1a79085884f"
	I0621 17:44:21.268619   15966 logs.go:123] Gathering logs for coredns [6c64fdd9272b3121b4e27f465daa7b64faa8d158e73909ea192346309140ef9a] ...
	I0621 17:44:21.268649   15966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c64fdd9272b3121b4e27f465daa7b64faa8d158e73909ea192346309140ef9a"
	I0621 17:44:21.304783   15966 logs.go:123] Gathering logs for kube-scheduler [9a1850def884a4ee493921bc6e0cf02cd984de3b810c1f4f58373179d9cdb59b] ...
	I0621 17:44:21.304818   15966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a1850def884a4ee493921bc6e0cf02cd984de3b810c1f4f58373179d9cdb59b"
	I0621 17:44:21.362492   15966 logs.go:123] Gathering logs for kube-proxy [425ded3f5fbd127782447ba12f42c1d2d75dfc2a65d10c76feceeb8a17534377] ...
	I0621 17:44:21.362523   15966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 425ded3f5fbd127782447ba12f42c1d2d75dfc2a65d10c76feceeb8a17534377"
	I0621 17:44:21.397649   15966 logs.go:123] Gathering logs for kube-controller-manager [39db13f3025dac0d47c10c12ef2c49faae53c5faf8f19f2c7934b1e8345a7b16] ...
	I0621 17:44:21.397680   15966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39db13f3025dac0d47c10c12ef2c49faae53c5faf8f19f2c7934b1e8345a7b16"
	I0621 17:44:21.463414   15966 logs.go:123] Gathering logs for CRI-O ...
	I0621 17:44:21.463447   15966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"

                                                
                                                
** /stderr **
addons_test.go:112: out/minikube-linux-amd64 start -p addons-299362 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: signal: killed
--- FAIL: TestAddons/Setup (2400.06s)
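
For manual triage of a stalled start like this, the label selectors the log is polling above can be queried directly on the live cluster. A minimal sketch, not part of the test run — the kube context name addons-299362 and the ingress-nginx/gcp-auth namespaces are assumptions based on minikube defaults, while the label selectors are copied from the log:

	# inspect the addon pods the kapi waiter was polling
	kubectl --context addons-299362 get pods -A
	kubectl --context addons-299362 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx -o wide
	kubectl --context addons-299362 -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth -o wide
	kubectl --context addons-299362 -n kube-system describe pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
	# collect full cluster logs for the profile
	minikube -p addons-299362 logs --file=addons-299362-logs.txt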

                                                
                                    
TestMultiControlPlane/serial/StartCluster (134.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-406291 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-406291 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 80 (2m12.649039424s)

                                                
                                                
-- stdout --
	* [ha-406291] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "ha-406291" primary control-plane node in "ha-406291" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	* Starting "ha-406291-m02" control-plane node in "ha-406291" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Found network options:
	  - NO_PROXY=192.168.39.198
	* Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.198
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0621 18:26:42.447747   30068 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:26:42.447858   30068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:26:42.447867   30068 out.go:304] Setting ErrFile to fd 2...
	I0621 18:26:42.447871   30068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:26:42.448064   30068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:26:42.448611   30068 out.go:298] Setting JSON to false
	I0621 18:26:42.449397   30068 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4100,"bootTime":1718990302,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 18:26:42.449454   30068 start.go:139] virtualization: kvm guest
	I0621 18:26:42.451750   30068 out.go:177] * [ha-406291] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 18:26:42.453097   30068 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 18:26:42.453116   30068 notify.go:220] Checking for updates...
	I0621 18:26:42.456195   30068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 18:26:42.457398   30068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:26:42.458579   30068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.459798   30068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 18:26:42.461088   30068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 18:26:42.462525   30068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 18:26:42.497263   30068 out.go:177] * Using the kvm2 driver based on user configuration
	I0621 18:26:42.498734   30068 start.go:297] selected driver: kvm2
	I0621 18:26:42.498753   30068 start.go:901] validating driver "kvm2" against <nil>
	I0621 18:26:42.498763   30068 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 18:26:42.499421   30068 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:26:42.499483   30068 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 18:26:42.513772   30068 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 18:26:42.513840   30068 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0621 18:26:42.514036   30068 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0621 18:26:42.514063   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:26:42.514070   30068 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0621 18:26:42.514080   30068 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0621 18:26:42.514119   30068 start.go:340] cluster config:
	{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:26:42.514203   30068 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:26:42.515839   30068 out.go:177] * Starting "ha-406291" primary control-plane node in "ha-406291" cluster
	I0621 18:26:42.516925   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:26:42.516952   30068 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 18:26:42.516960   30068 cache.go:56] Caching tarball of preloaded images
	I0621 18:26:42.517025   30068 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:26:42.517035   30068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:26:42.517302   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:26:42.517325   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json: {Name:mkd43eceea282503c79b6e4b90bbf7258fcf8b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:26:42.517445   30068 start.go:360] acquireMachinesLock for ha-406291: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:26:42.517470   30068 start.go:364] duration metric: took 13.314µs to acquireMachinesLock for "ha-406291"
	I0621 18:26:42.517485   30068 start.go:93] Provisioning new machine with config: &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:26:42.517531   30068 start.go:125] createHost starting for "" (driver="kvm2")
	I0621 18:26:42.518937   30068 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0621 18:26:42.519071   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:26:42.519109   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:26:42.533235   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0621 18:26:42.533669   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:26:42.534312   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:26:42.534360   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:26:42.534665   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:26:42.534880   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:26:42.535018   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:26:42.535180   30068 start.go:159] libmachine.API.Create for "ha-406291" (driver="kvm2")
	I0621 18:26:42.535209   30068 client.go:168] LocalClient.Create starting
	I0621 18:26:42.535233   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem
	I0621 18:26:42.535267   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:26:42.535282   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:26:42.535339   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem
	I0621 18:26:42.535357   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:26:42.535367   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:26:42.535383   30068 main.go:141] libmachine: Running pre-create checks...
	I0621 18:26:42.535396   30068 main.go:141] libmachine: (ha-406291) Calling .PreCreateCheck
	I0621 18:26:42.535734   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:26:42.536101   30068 main.go:141] libmachine: Creating machine...
	I0621 18:26:42.536113   30068 main.go:141] libmachine: (ha-406291) Calling .Create
	I0621 18:26:42.536232   30068 main.go:141] libmachine: (ha-406291) Creating KVM machine...
	I0621 18:26:42.537484   30068 main.go:141] libmachine: (ha-406291) DBG | found existing default KVM network
	I0621 18:26:42.538310   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.538153   30091 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0621 18:26:42.538339   30068 main.go:141] libmachine: (ha-406291) DBG | created network xml: 
	I0621 18:26:42.538346   30068 main.go:141] libmachine: (ha-406291) DBG | <network>
	I0621 18:26:42.538355   30068 main.go:141] libmachine: (ha-406291) DBG |   <name>mk-ha-406291</name>
	I0621 18:26:42.538371   30068 main.go:141] libmachine: (ha-406291) DBG |   <dns enable='no'/>
	I0621 18:26:42.538385   30068 main.go:141] libmachine: (ha-406291) DBG |   
	I0621 18:26:42.538392   30068 main.go:141] libmachine: (ha-406291) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0621 18:26:42.538400   30068 main.go:141] libmachine: (ha-406291) DBG |     <dhcp>
	I0621 18:26:42.538412   30068 main.go:141] libmachine: (ha-406291) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0621 18:26:42.538421   30068 main.go:141] libmachine: (ha-406291) DBG |     </dhcp>
	I0621 18:26:42.538439   30068 main.go:141] libmachine: (ha-406291) DBG |   </ip>
	I0621 18:26:42.538451   30068 main.go:141] libmachine: (ha-406291) DBG |   
	I0621 18:26:42.538458   30068 main.go:141] libmachine: (ha-406291) DBG | </network>
	I0621 18:26:42.538470   30068 main.go:141] libmachine: (ha-406291) DBG | 
	I0621 18:26:42.543401   30068 main.go:141] libmachine: (ha-406291) DBG | trying to create private KVM network mk-ha-406291 192.168.39.0/24...
	I0621 18:26:42.606041   30068 main.go:141] libmachine: (ha-406291) DBG | private KVM network mk-ha-406291 192.168.39.0/24 created
	I0621 18:26:42.606072   30068 main.go:141] libmachine: (ha-406291) Setting up store path in /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 ...
	I0621 18:26:42.606091   30068 main.go:141] libmachine: (ha-406291) Building disk image from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 18:26:42.606165   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.606075   30091 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.606280   30068 main.go:141] libmachine: (ha-406291) Downloading /home/jenkins/minikube-integration/19112-8111/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0621 18:26:42.829374   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.829262   30091 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa...
	I0621 18:26:42.941790   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.941666   30091 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/ha-406291.rawdisk...
	I0621 18:26:42.941834   30068 main.go:141] libmachine: (ha-406291) DBG | Writing magic tar header
	I0621 18:26:42.941844   30068 main.go:141] libmachine: (ha-406291) DBG | Writing SSH key tar header
	I0621 18:26:42.941852   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.941778   30091 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 ...
	I0621 18:26:42.941909   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291
	I0621 18:26:42.941989   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 (perms=drwx------)
	I0621 18:26:42.942007   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines
	I0621 18:26:42.942019   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines (perms=drwxr-xr-x)
	I0621 18:26:42.942033   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube (perms=drwxr-xr-x)
	I0621 18:26:42.942053   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111 (perms=drwxrwxr-x)
	I0621 18:26:42.942060   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.942069   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111
	I0621 18:26:42.942075   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0621 18:26:42.942080   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0621 18:26:42.942088   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins
	I0621 18:26:42.942104   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0621 18:26:42.942117   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home
	I0621 18:26:42.942128   30068 main.go:141] libmachine: (ha-406291) DBG | Skipping /home - not owner
	I0621 18:26:42.942142   30068 main.go:141] libmachine: (ha-406291) Creating domain...
	I0621 18:26:42.943154   30068 main.go:141] libmachine: (ha-406291) define libvirt domain using xml: 
	I0621 18:26:42.943176   30068 main.go:141] libmachine: (ha-406291) <domain type='kvm'>
	I0621 18:26:42.943183   30068 main.go:141] libmachine: (ha-406291)   <name>ha-406291</name>
	I0621 18:26:42.943188   30068 main.go:141] libmachine: (ha-406291)   <memory unit='MiB'>2200</memory>
	I0621 18:26:42.943199   30068 main.go:141] libmachine: (ha-406291)   <vcpu>2</vcpu>
	I0621 18:26:42.943203   30068 main.go:141] libmachine: (ha-406291)   <features>
	I0621 18:26:42.943208   30068 main.go:141] libmachine: (ha-406291)     <acpi/>
	I0621 18:26:42.943212   30068 main.go:141] libmachine: (ha-406291)     <apic/>
	I0621 18:26:42.943217   30068 main.go:141] libmachine: (ha-406291)     <pae/>
	I0621 18:26:42.943223   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943229   30068 main.go:141] libmachine: (ha-406291)   </features>
	I0621 18:26:42.943234   30068 main.go:141] libmachine: (ha-406291)   <cpu mode='host-passthrough'>
	I0621 18:26:42.943255   30068 main.go:141] libmachine: (ha-406291)   
	I0621 18:26:42.943266   30068 main.go:141] libmachine: (ha-406291)   </cpu>
	I0621 18:26:42.943284   30068 main.go:141] libmachine: (ha-406291)   <os>
	I0621 18:26:42.943318   30068 main.go:141] libmachine: (ha-406291)     <type>hvm</type>
	I0621 18:26:42.943328   30068 main.go:141] libmachine: (ha-406291)     <boot dev='cdrom'/>
	I0621 18:26:42.943333   30068 main.go:141] libmachine: (ha-406291)     <boot dev='hd'/>
	I0621 18:26:42.943341   30068 main.go:141] libmachine: (ha-406291)     <bootmenu enable='no'/>
	I0621 18:26:42.943345   30068 main.go:141] libmachine: (ha-406291)   </os>
	I0621 18:26:42.943355   30068 main.go:141] libmachine: (ha-406291)   <devices>
	I0621 18:26:42.943360   30068 main.go:141] libmachine: (ha-406291)     <disk type='file' device='cdrom'>
	I0621 18:26:42.943371   30068 main.go:141] libmachine: (ha-406291)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/boot2docker.iso'/>
	I0621 18:26:42.943384   30068 main.go:141] libmachine: (ha-406291)       <target dev='hdc' bus='scsi'/>
	I0621 18:26:42.943397   30068 main.go:141] libmachine: (ha-406291)       <readonly/>
	I0621 18:26:42.943404   30068 main.go:141] libmachine: (ha-406291)     </disk>
	I0621 18:26:42.943417   30068 main.go:141] libmachine: (ha-406291)     <disk type='file' device='disk'>
	I0621 18:26:42.943429   30068 main.go:141] libmachine: (ha-406291)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0621 18:26:42.943445   30068 main.go:141] libmachine: (ha-406291)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/ha-406291.rawdisk'/>
	I0621 18:26:42.943456   30068 main.go:141] libmachine: (ha-406291)       <target dev='hda' bus='virtio'/>
	I0621 18:26:42.943478   30068 main.go:141] libmachine: (ha-406291)     </disk>
	I0621 18:26:42.943499   30068 main.go:141] libmachine: (ha-406291)     <interface type='network'>
	I0621 18:26:42.943509   30068 main.go:141] libmachine: (ha-406291)       <source network='mk-ha-406291'/>
	I0621 18:26:42.943513   30068 main.go:141] libmachine: (ha-406291)       <model type='virtio'/>
	I0621 18:26:42.943519   30068 main.go:141] libmachine: (ha-406291)     </interface>
	I0621 18:26:42.943526   30068 main.go:141] libmachine: (ha-406291)     <interface type='network'>
	I0621 18:26:42.943532   30068 main.go:141] libmachine: (ha-406291)       <source network='default'/>
	I0621 18:26:42.943539   30068 main.go:141] libmachine: (ha-406291)       <model type='virtio'/>
	I0621 18:26:42.943544   30068 main.go:141] libmachine: (ha-406291)     </interface>
	I0621 18:26:42.943549   30068 main.go:141] libmachine: (ha-406291)     <serial type='pty'>
	I0621 18:26:42.943554   30068 main.go:141] libmachine: (ha-406291)       <target port='0'/>
	I0621 18:26:42.943560   30068 main.go:141] libmachine: (ha-406291)     </serial>
	I0621 18:26:42.943565   30068 main.go:141] libmachine: (ha-406291)     <console type='pty'>
	I0621 18:26:42.943571   30068 main.go:141] libmachine: (ha-406291)       <target type='serial' port='0'/>
	I0621 18:26:42.943583   30068 main.go:141] libmachine: (ha-406291)     </console>
	I0621 18:26:42.943593   30068 main.go:141] libmachine: (ha-406291)     <rng model='virtio'>
	I0621 18:26:42.943602   30068 main.go:141] libmachine: (ha-406291)       <backend model='random'>/dev/random</backend>
	I0621 18:26:42.943609   30068 main.go:141] libmachine: (ha-406291)     </rng>
	I0621 18:26:42.943617   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943621   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943627   30068 main.go:141] libmachine: (ha-406291)   </devices>
	I0621 18:26:42.943631   30068 main.go:141] libmachine: (ha-406291) </domain>
	I0621 18:26:42.943638   30068 main.go:141] libmachine: (ha-406291) 
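The block above is the libvirt <domain> document minikube generated for this VM (KVM, 2200 MiB, 2 vCPUs, boot2docker ISO as cdrom, a raw virtio disk, and two virtio NICs on "mk-ha-406291" and "default"). As a rough, hypothetical illustration of how such a document can be assembled programmatically, here is a minimal Go sketch using the standard encoding/xml package; the struct shape and field names are illustrative only and are not minikube's actual types.

    package main

    import (
    	"encoding/xml"
    	"fmt"
    )

    // Memory and Domain model only the handful of elements visible in the log above.
    type Memory struct {
    	Unit  string `xml:"unit,attr"`
    	Value string `xml:",chardata"`
    }

    type Domain struct {
    	XMLName xml.Name `xml:"domain"`
    	Type    string   `xml:"type,attr"`
    	Name    string   `xml:"name"`
    	Memory  Memory   `xml:"memory"`
    	VCPU    int      `xml:"vcpu"`
    }

    func main() {
    	d := Domain{
    		Type:   "kvm",
    		Name:   "ha-406291",
    		Memory: Memory{Unit: "MiB", Value: "2200"},
    		VCPU:   2,
    	}
    	out, err := xml.MarshalIndent(d, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out)) // <domain type="kvm"> ... </domain>
    }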
	I0621 18:26:42.948298   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:44:10:c4 in network default
	I0621 18:26:42.948968   30068 main.go:141] libmachine: (ha-406291) Ensuring networks are active...
	I0621 18:26:42.948988   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:42.949710   30068 main.go:141] libmachine: (ha-406291) Ensuring network default is active
	I0621 18:26:42.950033   30068 main.go:141] libmachine: (ha-406291) Ensuring network mk-ha-406291 is active
	I0621 18:26:42.950493   30068 main.go:141] libmachine: (ha-406291) Getting domain xml...
	I0621 18:26:42.951151   30068 main.go:141] libmachine: (ha-406291) Creating domain...
	I0621 18:26:44.128421   30068 main.go:141] libmachine: (ha-406291) Waiting to get IP...
	I0621 18:26:44.129183   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.129530   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.129550   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.129513   30091 retry.go:31] will retry after 273.280189ms: waiting for machine to come up
	I0621 18:26:44.404590   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.405440   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.405467   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.405386   30091 retry.go:31] will retry after 363.287979ms: waiting for machine to come up
	I0621 18:26:44.769749   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.770188   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.770217   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.770146   30091 retry.go:31] will retry after 445.9009ms: waiting for machine to come up
	I0621 18:26:45.217708   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:45.218113   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:45.218132   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:45.218075   30091 retry.go:31] will retry after 497.769852ms: waiting for machine to come up
	I0621 18:26:45.717913   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:45.718380   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:45.718402   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:45.718333   30091 retry.go:31] will retry after 609.412902ms: waiting for machine to come up
	I0621 18:26:46.329589   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:46.330043   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:46.330077   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:46.330033   30091 retry.go:31] will retry after 668.226784ms: waiting for machine to come up
	I0621 18:26:46.999851   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:47.000352   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:47.000399   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:47.000310   30091 retry.go:31] will retry after 928.90777ms: waiting for machine to come up
	I0621 18:26:47.931043   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:47.931568   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:47.931598   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:47.931527   30091 retry.go:31] will retry after 1.407643188s: waiting for machine to come up
	I0621 18:26:49.341126   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:49.341529   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:49.341557   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:49.341489   30091 retry.go:31] will retry after 1.657120945s: waiting for machine to come up
	I0621 18:26:51.001518   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:51.001999   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:51.002022   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:51.001955   30091 retry.go:31] will retry after 1.506025988s: waiting for machine to come up
	I0621 18:26:52.509823   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:52.510314   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:52.510342   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:52.510269   30091 retry.go:31] will retry after 2.859818514s: waiting for machine to come up
	I0621 18:26:55.371181   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:55.371726   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:55.371755   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:55.371678   30091 retry.go:31] will retry after 3.374080501s: waiting for machine to come up
	I0621 18:26:58.747494   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:58.748019   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:58.748039   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:58.747991   30091 retry.go:31] will retry after 4.386740875s: waiting for machine to come up
	I0621 18:27:03.136546   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.137046   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has current primary IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.137063   30068 main.go:141] libmachine: (ha-406291) Found IP for machine: 192.168.39.198
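The "will retry after …: waiting for machine to come up" sequence above is a poll-with-growing-backoff loop: the driver repeatedly looks up the domain's DHCP lease for MAC 52:54:00:38:dc:46 until an IP appears. A minimal, hypothetical Go sketch of that pattern (not minikube's retry.go) could look like this:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP polls lookup() until it returns a non-empty address or the
    // timeout expires, sleeping a little longer (with jitter) between tries,
    // much like the 273ms, 363ms, 445ms, ... delays in the log above.
    func waitForIP(lookup func() string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	base := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip := lookup(); ip != "" {
    			return ip, nil
    		}
    		sleep := base + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		base = base * 3 / 2 // grow the backoff
    	}
    	return "", errors.New("timed out waiting for an IP address")
    }

    func main() {
    	start := time.Now()
    	ip, err := waitForIP(func() string {
    		// Stand-in for reading the domain's DHCP lease.
    		if time.Since(start) > 2*time.Second {
    			return "192.168.39.198"
    		}
    		return ""
    	}, 30*time.Second)
    	fmt.Println(ip, err)
    }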
	I0621 18:27:03.137079   30068 main.go:141] libmachine: (ha-406291) Reserving static IP address...
	I0621 18:27:03.137427   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find host DHCP lease matching {name: "ha-406291", mac: "52:54:00:38:dc:46", ip: "192.168.39.198"} in network mk-ha-406291
	I0621 18:27:03.211473   30068 main.go:141] libmachine: (ha-406291) DBG | Getting to WaitForSSH function...
	I0621 18:27:03.211506   30068 main.go:141] libmachine: (ha-406291) Reserved static IP address: 192.168.39.198
	I0621 18:27:03.211519   30068 main.go:141] libmachine: (ha-406291) Waiting for SSH to be available...
	I0621 18:27:03.214029   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.214477   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291
	I0621 18:27:03.214509   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find defined IP address of network mk-ha-406291 interface with MAC address 52:54:00:38:dc:46
	I0621 18:27:03.214661   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH client type: external
	I0621 18:27:03.214702   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa (-rw-------)
	I0621 18:27:03.214745   30068 main.go:141] libmachine: (ha-406291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:03.214771   30068 main.go:141] libmachine: (ha-406291) DBG | About to run SSH command:
	I0621 18:27:03.214784   30068 main.go:141] libmachine: (ha-406291) DBG | exit 0
	I0621 18:27:03.218578   30068 main.go:141] libmachine: (ha-406291) DBG | SSH cmd err, output: exit status 255: 
	I0621 18:27:03.218603   30068 main.go:141] libmachine: (ha-406291) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0621 18:27:03.218614   30068 main.go:141] libmachine: (ha-406291) DBG | command : exit 0
	I0621 18:27:03.218630   30068 main.go:141] libmachine: (ha-406291) DBG | err     : exit status 255
	I0621 18:27:03.218643   30068 main.go:141] libmachine: (ha-406291) DBG | output  : 
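The failed probe above is WaitForSSH checking availability by running `exit 0` through the external ssh client with the options shown in the log; exit status 255 simply means sshd is not reachable yet and the probe is repeated. A hypothetical Go sketch of that probe, using only the flags that appear in the log lines above:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // sshReady runs `exit 0` on the guest through the system ssh binary with
    // the same options logged above; a nil error means sshd answered.
    func sshReady(ip, keyPath string) bool {
    	cmd := exec.Command("/usr/bin/ssh",
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		"docker@"+ip,
    		"exit 0")
    	return cmd.Run() == nil
    }

    func main() {
    	ok := sshReady("192.168.39.198",
    		"/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa")
    	fmt.Println("ssh ready:", ok)
    }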
	I0621 18:27:06.220803   30068 main.go:141] libmachine: (ha-406291) DBG | Getting to WaitForSSH function...
	I0621 18:27:06.223287   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.223552   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.223591   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.223725   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH client type: external
	I0621 18:27:06.223751   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa (-rw-------)
	I0621 18:27:06.223775   30068 main.go:141] libmachine: (ha-406291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:06.223788   30068 main.go:141] libmachine: (ha-406291) DBG | About to run SSH command:
	I0621 18:27:06.223797   30068 main.go:141] libmachine: (ha-406291) DBG | exit 0
	I0621 18:27:06.345962   30068 main.go:141] libmachine: (ha-406291) DBG | SSH cmd err, output: <nil>: 
	I0621 18:27:06.346198   30068 main.go:141] libmachine: (ha-406291) KVM machine creation complete!
	I0621 18:27:06.346530   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:27:06.347151   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:06.347376   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:06.347539   30068 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0621 18:27:06.347553   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:06.349257   30068 main.go:141] libmachine: Detecting operating system of created instance...
	I0621 18:27:06.349272   30068 main.go:141] libmachine: Waiting for SSH to be available...
	I0621 18:27:06.349278   30068 main.go:141] libmachine: Getting to WaitForSSH function...
	I0621 18:27:06.349284   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.351365   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.351709   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.351738   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.351848   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.352053   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.352215   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.352441   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.352676   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.352926   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.352939   30068 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0621 18:27:06.449038   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:06.449066   30068 main.go:141] libmachine: Detecting the provisioner...
	I0621 18:27:06.449077   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.451811   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.452202   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.452223   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.452405   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.452602   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.452762   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.452898   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.453074   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.453321   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.453334   30068 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0621 18:27:06.550539   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0621 18:27:06.550611   30068 main.go:141] libmachine: found compatible host: buildroot
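Provisioner detection above amounts to running `cat /etc/os-release` and matching the ID/NAME fields; "buildroot" selects the Buildroot provisioner. A small, hypothetical Go sketch of parsing that output (osReleaseID is an illustrative helper, not a minikube function):

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // osReleaseID extracts the ID= field from /etc/os-release style output.
    func osReleaseID(output string) string {
    	sc := bufio.NewScanner(strings.NewReader(output))
    	for sc.Scan() {
    		line := sc.Text()
    		if strings.HasPrefix(line, "ID=") {
    			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
    		}
    	}
    	return ""
    }

    func main() {
    	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
    	if osReleaseID(out) == "buildroot" {
    		fmt.Println("found compatible host: buildroot")
    	}
    }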
	I0621 18:27:06.550618   30068 main.go:141] libmachine: Provisioning with buildroot...
	I0621 18:27:06.550625   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.550871   30068 buildroot.go:166] provisioning hostname "ha-406291"
	I0621 18:27:06.550891   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.551068   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.553701   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.554112   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.554138   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.554279   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.554452   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.554601   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.554725   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.554869   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.555029   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.555040   30068 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291 && echo "ha-406291" | sudo tee /etc/hostname
	I0621 18:27:06.664012   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291
	
	I0621 18:27:06.664038   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.666600   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.666923   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.666952   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.667091   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.667277   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.667431   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.667559   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.667745   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.667932   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.667949   30068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:27:06.778156   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:06.778199   30068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:27:06.778224   30068 buildroot.go:174] setting up certificates
	I0621 18:27:06.778237   30068 provision.go:84] configureAuth start
	I0621 18:27:06.778250   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.778526   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:06.781267   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.781583   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.781610   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.781773   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.784225   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.784546   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.784564   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.784717   30068 provision.go:143] copyHostCerts
	I0621 18:27:06.784747   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:06.784796   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:27:06.784813   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:06.784893   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:27:06.784992   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:06.785017   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:27:06.785023   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:06.785064   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:27:06.785126   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:06.785153   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:27:06.785162   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:06.785194   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:27:06.785257   30068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291 san=[127.0.0.1 192.168.39.198 ha-406291 localhost minikube]
	I0621 18:27:06.904910   30068 provision.go:177] copyRemoteCerts
	I0621 18:27:06.904976   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:27:06.905004   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.907600   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.907883   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.907916   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.908115   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.908308   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.908462   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.908599   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:06.987463   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:27:06.987540   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:27:07.009572   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:27:07.009661   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0621 18:27:07.031219   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:27:07.031333   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 18:27:07.052682   30068 provision.go:87] duration metric: took 274.433059ms to configureAuth
	I0621 18:27:07.052709   30068 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:27:07.052895   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:07.052984   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.055368   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.055720   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.055742   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.055971   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.056161   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.056324   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.056453   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.056615   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:07.056785   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:07.056814   30068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:27:07.307055   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 18:27:07.307083   30068 main.go:141] libmachine: Checking connection to Docker...
	I0621 18:27:07.307105   30068 main.go:141] libmachine: (ha-406291) Calling .GetURL
	I0621 18:27:07.308373   30068 main.go:141] libmachine: (ha-406291) DBG | Using libvirt version 6000000
	I0621 18:27:07.310322   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.310631   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.310658   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.310756   30068 main.go:141] libmachine: Docker is up and running!
	I0621 18:27:07.310768   30068 main.go:141] libmachine: Reticulating splines...
	I0621 18:27:07.310774   30068 client.go:171] duration metric: took 24.775558818s to LocalClient.Create
	I0621 18:27:07.310795   30068 start.go:167] duration metric: took 24.775614868s to libmachine.API.Create "ha-406291"
	I0621 18:27:07.310807   30068 start.go:293] postStartSetup for "ha-406291" (driver="kvm2")
	I0621 18:27:07.310818   30068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:27:07.310835   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.311186   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:27:07.311208   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.313308   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.313543   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.313581   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.313682   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.313855   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.314042   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.314209   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.391859   30068 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:27:07.396062   30068 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:27:07.396083   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:27:07.396132   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:27:07.396193   30068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:27:07.396202   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:27:07.396289   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:27:07.405435   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:07.427927   30068 start.go:296] duration metric: took 117.075834ms for postStartSetup
	I0621 18:27:07.427984   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:27:07.428562   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:07.431157   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.431479   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.431523   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.431791   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:07.431969   30068 start.go:128] duration metric: took 24.914429669s to createHost
	I0621 18:27:07.431990   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.434121   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.434421   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.434445   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.434510   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.434692   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.434865   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.435009   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.435168   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:07.435372   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:07.435384   30068 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0621 18:27:07.530141   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718994427.508226463
	
	I0621 18:27:07.530165   30068 fix.go:216] guest clock: 1718994427.508226463
	I0621 18:27:07.530173   30068 fix.go:229] Guest: 2024-06-21 18:27:07.508226463 +0000 UTC Remote: 2024-06-21 18:27:07.431981059 +0000 UTC m=+25.016949864 (delta=76.245404ms)
	I0621 18:27:07.530199   30068 fix.go:200] guest clock delta is within tolerance: 76.245404ms
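The clock check above compares the guest's `date +%s.%N` output against the host timestamp recorded just before the command ran, and only resynchronizes if the delta exceeds a tolerance (the 76.245404ms delta here is accepted). A rough Go sketch of that computation using the values from the log; the 2s tolerance below is an assumption for illustration, not necessarily minikube's threshold:

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    const tolerance = 2 * time.Second // assumed threshold, for illustration only

    // guestClockDelta parses `date +%s.%N` output and returns the absolute
    // difference from the given host timestamp. Parsing via float64 loses a
    // few hundred nanoseconds of precision, which does not matter here.
    func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(dateOutput, 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*1e9))
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, nil
    }

    func main() {
    	// Values taken from the log lines above.
    	host := time.Unix(1718994427, 431981059)
    	delta, err := guestClockDelta("1718994427.508226463", host)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("delta=%v, within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
    }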
	I0621 18:27:07.530204   30068 start.go:83] releasing machines lock for "ha-406291", held for 25.012726918s
	I0621 18:27:07.530222   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.530466   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:07.532753   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.533110   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.533151   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.533275   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533702   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533877   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533978   30068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:27:07.534028   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.534087   30068 ssh_runner.go:195] Run: cat /version.json
	I0621 18:27:07.534115   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.536489   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536798   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.536828   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536845   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536983   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.537154   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.537312   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.537330   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.537337   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.537509   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.537507   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.537675   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.537830   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.537968   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.610886   30068 ssh_runner.go:195] Run: systemctl --version
	I0621 18:27:07.648150   30068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:27:07.798080   30068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:27:07.803683   30068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:27:07.803731   30068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:27:07.820345   30068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0621 18:27:07.820363   30068 start.go:494] detecting cgroup driver to use...
	I0621 18:27:07.820412   30068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:27:07.835960   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:27:07.849269   30068 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:27:07.849324   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:27:07.861858   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:27:07.874371   30068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:27:07.984965   30068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:27:08.126897   30068 docker.go:233] disabling docker service ...
	I0621 18:27:08.126973   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:27:08.140294   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:27:08.152460   30068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:27:08.289101   30068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:27:08.414578   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 18:27:08.428193   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:27:08.445335   30068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:27:08.445406   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.454715   30068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:27:08.454780   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.464286   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.473688   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.483215   30068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:27:08.492907   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.502386   30068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.518138   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
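The commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed: pin the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, move conmon into the "pod" cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. The same kind of line rewrite done from Go (a hypothetical sketch, not minikube's code) could look like:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // setPauseImage mirrors the sed expression above:
    //   s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|
    func setPauseImage(conf, image string) string {
    	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	return re.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", image))
    }

    func main() {
    	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.8\"\n"
    	fmt.Print(setPauseImage(conf, "registry.k8s.io/pause:3.9"))
    }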
	I0621 18:27:08.527822   30068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:27:08.536491   30068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 18:27:08.536537   30068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 18:27:08.548343   30068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 18:27:08.557395   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:27:08.668782   30068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 18:27:08.793146   30068 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:27:08.793228   30068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:27:08.797886   30068 start.go:562] Will wait 60s for crictl version
	I0621 18:27:08.797933   30068 ssh_runner.go:195] Run: which crictl
	I0621 18:27:08.801183   30068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:27:08.838953   30068 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:27:08.839028   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:27:08.865047   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:27:08.892059   30068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:27:08.893365   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:08.895801   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:08.896174   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:08.896198   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:08.896377   30068 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:27:08.900124   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:27:08.912152   30068 kubeadm.go:877] updating cluster {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0621 18:27:08.912252   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:27:08.912299   30068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:27:08.941267   30068 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0621 18:27:08.941328   30068 ssh_runner.go:195] Run: which lz4
	I0621 18:27:08.944757   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0621 18:27:08.944843   30068 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0621 18:27:08.948482   30068 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0621 18:27:08.948507   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0621 18:27:10.186487   30068 crio.go:462] duration metric: took 1.241671996s to copy over tarball
	I0621 18:27:10.186568   30068 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0621 18:27:12.219224   30068 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.032622286s)
	I0621 18:27:12.219256   30068 crio.go:469] duration metric: took 2.032747658s to extract the tarball
	I0621 18:27:12.219265   30068 ssh_runner.go:146] rm: /preloaded.tar.lz4
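The preload path above copies a ~395 MB lz4 tarball of container images into the guest and unpacks it under /var, timing both steps ("duration metric: took …"). A hypothetical Go sketch of running such a command and reporting its duration in the same style (illustrative only; the command will only succeed on a host that actually has /preloaded.tar.lz4):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // runTimed executes a command and reports how long it took, similar to
    // the "duration metric: took ... to copy over tarball" lines above.
    func runTimed(name string, arg ...string) (time.Duration, error) {
    	start := time.Now()
    	err := exec.Command(name, arg...).Run()
    	return time.Since(start), err
    }

    func main() {
    	d, err := runTimed("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	fmt.Printf("duration metric: took %v to extract the tarball (err=%v)\n", d, err)
    }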
	I0621 18:27:12.255526   30068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:27:12.297692   30068 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 18:27:12.297715   30068 cache_images.go:84] Images are preloaded, skipping loading
	I0621 18:27:12.297725   30068 kubeadm.go:928] updating node { 192.168.39.198 8443 v1.30.2 crio true true} ...
	I0621 18:27:12.297863   30068 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0621 18:27:12.297956   30068 ssh_runner.go:195] Run: crio config
	I0621 18:27:12.347243   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:27:12.347276   30068 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0621 18:27:12.347288   30068 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0621 18:27:12.347314   30068 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.198 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-406291 NodeName:ha-406291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0621 18:27:12.347487   30068 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-406291"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
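
The multi-document YAML above is the full kubeadm configuration minikube renders for this node (it is copied to /var/tmp/minikube/kubeadm.yaml a few steps later and passed to kubeadm init). As a minimal sketch, one way to spot-check the rendered file before init is to split it on the document separators and read back the fields that matter most in this run: kubernetesVersion, controlPlaneEndpoint, and the pod subnet. The file path comes from the log; the gopkg.in/yaml.v3 dependency is an assumption of this sketch, not something the test itself uses.

	package main

	import (
		"fmt"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Read the rendered kubeadm config (path taken from the log above).
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// The file is a multi-document YAML stream; inspect each document's kind.
		for _, doc := range strings.Split(string(data), "\n---\n") {
			var m map[string]interface{}
			if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			fmt.Println("kind:", m["kind"])
			if m["kind"] == "ClusterConfiguration" {
				fmt.Println("  kubernetesVersion:", m["kubernetesVersion"])
				fmt.Println("  controlPlaneEndpoint:", m["controlPlaneEndpoint"])
				if nw, ok := m["networking"].(map[string]interface{}); ok {
					fmt.Println("  podSubnet:", nw["podSubnet"])
				}
			}
		}
	}
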
	
	I0621 18:27:12.347514   30068 kube-vip.go:115] generating kube-vip config ...
	I0621 18:27:12.347563   30068 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:27:12.362180   30068 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:27:12.362273   30068 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
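
The manifest above is the static pod minikube writes for kube-vip: it runs on the host network with NET_ADMIN/NET_RAW so it can announce the HA virtual IP 192.168.39.254 over ARP (vip_arp), and with lb_enable it load-balances API-server traffic on port 8443 across control-plane nodes. A minimal sketch, assuming the VIP and port from this config, of checking that something answers on the VIP once the control plane is up:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// VIP and port are the values from the kube-vip config above.
		conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, "VIP not reachable:", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("kube-vip is answering on", conn.RemoteAddr())
	}
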
	I0621 18:27:12.362316   30068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:27:12.371448   30068 binaries.go:44] Found k8s binaries, skipping transfer
	I0621 18:27:12.371499   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0621 18:27:12.380031   30068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0621 18:27:12.395354   30068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0621 18:27:12.410533   30068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0621 18:27:12.425474   30068 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0621 18:27:12.440059   30068 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0621 18:27:12.443523   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:27:12.454828   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:27:12.572486   30068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0621 18:27:12.589057   30068 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.198
	I0621 18:27:12.589078   30068 certs.go:194] generating shared ca certs ...
	I0621 18:27:12.589095   30068 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.589221   30068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:27:12.589272   30068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:27:12.589282   30068 certs.go:256] generating profile certs ...
	I0621 18:27:12.589333   30068 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:27:12.589346   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt with IP's: []
	I0621 18:27:12.759863   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt ...
	I0621 18:27:12.759890   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt: {Name:mk1350197087e6f37ca28e80a43c199beace4f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.760090   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key ...
	I0621 18:27:12.760104   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key: {Name:mk90994b992a268304b337419707e3332d3f039a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.760206   30068 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92
	I0621 18:27:12.760222   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.254]
	I0621 18:27:13.132336   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 ...
	I0621 18:27:13.132362   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92: {Name:mke7daa70ff2d7bf8fa87eea51b1ed6731c0dd6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.132530   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92 ...
	I0621 18:27:13.132546   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92: {Name:mk310235904dba1c4db66ef73b8dcc06ff030051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.132647   30068 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:27:13.132737   30068 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
	I0621 18:27:13.132790   30068 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:27:13.132806   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt with IP's: []
	I0621 18:27:13.317891   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt ...
	I0621 18:27:13.317927   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt: {Name:mk5e450ef3633fa54e81eaeb94f9408c94729912 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.318119   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key ...
	I0621 18:27:13.318132   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key: {Name:mk3a1443924b05c36251566d5313d0eeb467e0fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
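
The certs steps above follow one pattern throughout: the cached minikubeCA key pair signs a per-profile client certificate, an apiserver serving certificate whose SANs cover the service IP, localhost, the node IP, and the HA VIP, and an aggregator (proxy-client) certificate. The sketch below reproduces that pattern with only the standard library; the throwaway CA and the PEM-to-stdout output are simplifications for illustration, not what minikube's certs.go actually writes.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func check(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Throwaway CA standing in for the cached minikubeCA key pair.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(3, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
		check(err)
		caCert, err := x509.ParseCertificate(caDER)
		check(err)

		// Leaf certificate with the same IP SANs the log shows for apiserver.crt.
		leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		leaf := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.198"), net.ParseIP("192.168.39.254"),
			},
		}
		leafDER, err := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
		check(err)
		check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER}))
	}
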
	I0621 18:27:13.318220   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:27:13.318241   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:27:13.318251   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:27:13.318264   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:27:13.318274   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:27:13.318290   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:27:13.318302   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:27:13.318314   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:27:13.318363   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:27:13.318396   30068 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:27:13.318406   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:27:13.318428   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:27:13.318449   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:27:13.318469   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:27:13.318506   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:13.318531   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.318544   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.318556   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.319121   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:27:13.345382   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:27:13.379289   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:27:13.406853   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:27:13.430624   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0621 18:27:13.452498   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0621 18:27:13.474381   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:27:13.497475   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:27:13.520548   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:27:13.543849   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:27:13.569722   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:27:13.594191   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0621 18:27:13.611312   30068 ssh_runner.go:195] Run: openssl version
	I0621 18:27:13.616881   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:27:13.627054   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.631162   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.631214   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.636845   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:27:13.648132   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:27:13.658846   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.663074   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.663140   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.668358   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 18:27:13.678369   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:27:13.688293   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.692517   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.692581   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.697837   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0621 18:27:13.707967   30068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:27:13.711761   30068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 18:27:13.711821   30068 kubeadm.go:391] StartCluster: {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:27:13.711887   30068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0621 18:27:13.711960   30068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0621 18:27:13.752929   30068 cri.go:89] found id: ""
	I0621 18:27:13.753017   30068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0621 18:27:13.762514   30068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0621 18:27:13.771612   30068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0621 18:27:13.781740   30068 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0621 18:27:13.781758   30068 kubeadm.go:156] found existing configuration files:
	
	I0621 18:27:13.781811   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0621 18:27:13.790876   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0621 18:27:13.790943   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0621 18:27:13.800011   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0621 18:27:13.809117   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0621 18:27:13.809168   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0621 18:27:13.818279   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0621 18:27:13.827522   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0621 18:27:13.827584   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0621 18:27:13.836671   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0621 18:27:13.845242   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0621 18:27:13.845298   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0621 18:27:13.854365   30068 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0621 18:27:13.951888   30068 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0621 18:27:13.951970   30068 kubeadm.go:309] [preflight] Running pre-flight checks
	I0621 18:27:14.081675   30068 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0621 18:27:14.081845   30068 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0621 18:27:14.081983   30068 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0621 18:27:14.292951   30068 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0621 18:27:14.423174   30068 out.go:204]   - Generating certificates and keys ...
	I0621 18:27:14.423287   30068 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0621 18:27:14.423355   30068 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0621 18:27:14.524306   30068 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0621 18:27:14.693249   30068 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0621 18:27:14.771462   30068 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0621 18:27:14.965492   30068 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0621 18:27:15.095342   30068 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0621 18:27:15.095646   30068 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-406291 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I0621 18:27:15.247328   30068 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0621 18:27:15.247729   30068 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-406291 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I0621 18:27:15.326656   30068 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0621 18:27:15.470979   30068 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0621 18:27:15.620090   30068 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0621 18:27:15.620402   30068 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0621 18:27:15.715693   30068 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0621 18:27:16.259484   30068 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0621 18:27:16.704626   30068 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0621 18:27:16.836633   30068 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0621 18:27:16.996818   30068 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0621 18:27:16.997517   30068 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0621 18:27:16.999949   30068 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0621 18:27:17.001874   30068 out.go:204]   - Booting up control plane ...
	I0621 18:27:17.001982   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0621 18:27:17.002874   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0621 18:27:17.003729   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0621 18:27:17.018894   30068 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0621 18:27:17.019816   30068 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0621 18:27:17.019944   30068 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0621 18:27:17.138099   30068 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0621 18:27:17.138195   30068 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0621 18:27:17.639115   30068 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.282189ms
	I0621 18:27:17.639214   30068 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0621 18:27:23.502026   30068 kubeadm.go:309] [api-check] The API server is healthy after 5.864418149s
	I0621 18:27:23.512938   30068 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0621 18:27:23.528670   30068 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0621 18:27:24.059886   30068 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0621 18:27:24.060060   30068 kubeadm.go:309] [mark-control-plane] Marking the node ha-406291 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0621 18:27:24.071607   30068 kubeadm.go:309] [bootstrap-token] Using token: ha2utu.p9k0bq1xsr5791t7
	I0621 18:27:24.073185   30068 out.go:204]   - Configuring RBAC rules ...
	I0621 18:27:24.073336   30068 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0621 18:27:24.084336   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0621 18:27:24.092265   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0621 18:27:24.096415   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0621 18:27:24.101175   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0621 18:27:24.104689   30068 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0621 18:27:24.121568   30068 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0621 18:27:24.349610   30068 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0621 18:27:24.907607   30068 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0621 18:27:24.908452   30068 kubeadm.go:309] 
	I0621 18:27:24.908529   30068 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0621 18:27:24.908541   30068 kubeadm.go:309] 
	I0621 18:27:24.908607   30068 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0621 18:27:24.908645   30068 kubeadm.go:309] 
	I0621 18:27:24.908698   30068 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0621 18:27:24.908780   30068 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0621 18:27:24.908863   30068 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0621 18:27:24.908873   30068 kubeadm.go:309] 
	I0621 18:27:24.908975   30068 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0621 18:27:24.908993   30068 kubeadm.go:309] 
	I0621 18:27:24.909038   30068 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0621 18:27:24.909045   30068 kubeadm.go:309] 
	I0621 18:27:24.909086   30068 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0621 18:27:24.909160   30068 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0621 18:27:24.909256   30068 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0621 18:27:24.909274   30068 kubeadm.go:309] 
	I0621 18:27:24.909401   30068 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0621 18:27:24.909522   30068 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0621 18:27:24.909544   30068 kubeadm.go:309] 
	I0621 18:27:24.909671   30068 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ha2utu.p9k0bq1xsr5791t7 \
	I0621 18:27:24.909771   30068 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df \
	I0621 18:27:24.909810   30068 kubeadm.go:309] 	--control-plane 
	I0621 18:27:24.909824   30068 kubeadm.go:309] 
	I0621 18:27:24.909898   30068 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0621 18:27:24.909904   30068 kubeadm.go:309] 
	I0621 18:27:24.909977   30068 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ha2utu.p9k0bq1xsr5791t7 \
	I0621 18:27:24.910064   30068 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df 
	I0621 18:27:24.910664   30068 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
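
The join commands printed by kubeadm above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA certificate's DER-encoded public key info. A small sketch (assuming access to the node's /var/lib/minikube/certs/ca.crt from the log) that recomputes the same value:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Read the cluster CA written earlier in the log.
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found in ca.crt")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// kubeadm pins the SHA-256 of the Subject Public Key Info.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}

If the output matches the hash in the join command (25b189dd…47df here), a joining node is talking to the CA it expects.
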
	I0621 18:27:24.910700   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:27:24.910708   30068 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0621 18:27:24.912398   30068 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0621 18:27:24.913676   30068 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0621 18:27:24.919660   30068 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0621 18:27:24.919677   30068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0621 18:27:24.938734   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0621 18:27:25.303975   30068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0621 18:27:25.304070   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:25.304073   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-406291 minikube.k8s.io/updated_at=2024_06_21T18_27_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600 minikube.k8s.io/name=ha-406291 minikube.k8s.io/primary=true
	I0621 18:27:25.334777   30068 ops.go:34] apiserver oom_adj: -16
	I0621 18:27:25.436873   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:25.937461   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:26.436991   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:26.937206   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:27.437152   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:27.937860   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:28.437177   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:28.937036   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:29.437007   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:29.937140   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:30.437060   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:30.937199   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:31.437695   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:31.937675   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:32.437034   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:32.937808   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:33.437793   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:33.937401   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:34.437307   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:34.937172   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:35.437428   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:35.937146   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:36.436951   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:36.937873   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:37.039583   30068 kubeadm.go:1107] duration metric: took 11.735587948s to wait for elevateKubeSystemPrivileges
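
The burst of identical "kubectl get sa default" runs above is a poll-until-ready loop: the command keeps failing until the controller-manager has created the default service account, at which point minikube knows the kube-system plumbing is usable. A hedged sketch of an equivalent loop, reusing the binary and kubeconfig paths from the log:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Retry every 500ms until the default service account exists or we time out.
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		kubectl := "/var/lib/minikube/binaries/v1.30.2/kubectl"
		for {
			cmd := exec.CommandContext(ctx, "sudo", kubectl, "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account is ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for the default service account")
				return
			case <-time.After(500 * time.Millisecond):
			}
		}
	}
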
	W0621 18:27:37.039626   30068 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0621 18:27:37.039635   30068 kubeadm.go:393] duration metric: took 23.327819322s to StartCluster
	I0621 18:27:37.039654   30068 settings.go:142] acquiring lock: {Name:mkdbb660cad4d8fb446e5c2ca4439ea3326e9592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:37.039737   30068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:27:37.040362   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/kubeconfig: {Name:mk87038194ab41f67dd50d90b017d32a83c3da4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:37.040584   30068 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:27:37.040604   30068 start.go:240] waiting for startup goroutines ...
	I0621 18:27:37.040603   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0621 18:27:37.040612   30068 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0621 18:27:37.040669   30068 addons.go:69] Setting storage-provisioner=true in profile "ha-406291"
	I0621 18:27:37.040677   30068 addons.go:69] Setting default-storageclass=true in profile "ha-406291"
	I0621 18:27:37.040699   30068 addons.go:234] Setting addon storage-provisioner=true in "ha-406291"
	I0621 18:27:37.040730   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:27:37.040700   30068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-406291"
	I0621 18:27:37.040772   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:37.041052   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.041075   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.041146   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.041174   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.055583   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42699
	I0621 18:27:37.056062   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.056549   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.056570   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.056894   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.057371   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.057399   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.061343   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44857
	I0621 18:27:37.061846   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.062393   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.062418   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.062721   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.062885   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.065021   30068 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:27:37.065351   30068 kapi.go:59] client config for ha-406291: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt", KeyFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key", CAFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf98a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0621 18:27:37.065825   30068 cert_rotation.go:137] Starting client certificate rotation controller
	I0621 18:27:37.066065   30068 addons.go:234] Setting addon default-storageclass=true in "ha-406291"
	I0621 18:27:37.066106   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:27:37.066471   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.066512   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.072759   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39433
	I0621 18:27:37.073274   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.073791   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.073819   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.074169   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.074346   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.076096   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:37.078312   30068 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0621 18:27:37.079815   30068 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 18:27:37.079840   30068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0621 18:27:37.079864   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:37.081896   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0621 18:27:37.082293   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.082859   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.082878   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.083163   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.083202   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.083607   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.083648   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.083733   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:37.083752   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.083817   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:37.083990   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:37.084135   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:37.084288   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:37.103512   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42081
	I0621 18:27:37.103937   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.104456   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.104473   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.104853   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.105052   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.106976   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:37.107211   30068 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0621 18:27:37.107231   30068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0621 18:27:37.107252   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:37.110295   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.110729   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:37.110755   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.110870   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:37.111030   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:37.111197   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:37.111314   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:37.137868   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
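
The long pipeline above edits the CoreDNS Corefile stored in the coredns ConfigMap rather than any file on disk: it inserts a hosts block ahead of the forward plugin so pods can resolve host.minikube.internal to the host gateway (192.168.39.1), adds a log directive before errors, and replaces the ConfigMap in place. Once applied, the injected portion looks roughly like this:

	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
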
	I0621 18:27:37.228739   30068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 18:27:37.290397   30068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0621 18:27:37.684619   30068 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0621 18:27:37.902862   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.902882   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.902957   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.902988   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903179   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903194   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903203   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.903210   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903287   30068 main.go:141] libmachine: (ha-406291) DBG | Closing plugin on server side
	I0621 18:27:37.903300   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903312   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903321   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.903328   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903474   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903485   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903513   30068 main.go:141] libmachine: (ha-406291) DBG | Closing plugin on server side
	I0621 18:27:37.903578   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903595   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903740   30068 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0621 18:27:37.903767   30068 round_trippers.go:469] Request Headers:
	I0621 18:27:37.903778   30068 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:27:37.903784   30068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:27:37.922164   30068 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0621 18:27:37.922691   30068 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0621 18:27:37.922706   30068 round_trippers.go:469] Request Headers:
	I0621 18:27:37.922713   30068 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:27:37.922718   30068 round_trippers.go:473]     Content-Type: application/json
	I0621 18:27:37.922720   30068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:27:37.926249   30068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0621 18:27:37.926491   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.926512   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.926731   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.926748   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.928515   30068 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0621 18:27:37.930095   30068 addons.go:510] duration metric: took 889.47949ms for enable addons: enabled=[storage-provisioner default-storageclass]
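
The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses/standard a few lines up is most likely the default-storageclass addon doing its work: fetch the "standard" StorageClass installed by storage-provisioner and write it back marked as the cluster default. A hedged client-go sketch of the same operation (the kubeconfig path is the one used throughout this run; client-go is an assumption of the sketch, not necessarily what minikube's addon code calls):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19112-8111/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		// Fetch the StorageClass created by the storage-provisioner addon ...
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// ... and mark it as the default class for the cluster.
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("standard StorageClass marked as default")
	}
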
	I0621 18:27:37.930127   30068 start.go:245] waiting for cluster config update ...
	I0621 18:27:37.930137   30068 start.go:254] writing updated cluster config ...
	I0621 18:27:37.931687   30068 out.go:177] 
	I0621 18:27:37.933039   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:37.933102   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:37.934716   30068 out.go:177] * Starting "ha-406291-m02" control-plane node in "ha-406291" cluster
	I0621 18:27:37.935953   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:27:37.935970   30068 cache.go:56] Caching tarball of preloaded images
	I0621 18:27:37.936052   30068 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:27:37.936063   30068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:27:37.936142   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:37.936325   30068 start.go:360] acquireMachinesLock for ha-406291-m02: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:27:37.936370   30068 start.go:364] duration metric: took 24.972µs to acquireMachinesLock for "ha-406291-m02"
	I0621 18:27:37.936392   30068 start.go:93] Provisioning new machine with config: &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:27:37.936481   30068 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0621 18:27:37.938349   30068 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0621 18:27:37.938428   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.938450   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.952767   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34515
	I0621 18:27:37.953176   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.953649   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.953669   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.953963   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.954162   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:37.954301   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:37.954431   30068 start.go:159] libmachine.API.Create for "ha-406291" (driver="kvm2")
	I0621 18:27:37.954456   30068 client.go:168] LocalClient.Create starting
	I0621 18:27:37.954488   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem
	I0621 18:27:37.954518   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:27:37.954538   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:27:37.954589   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem
	I0621 18:27:37.954607   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:27:37.954621   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:27:37.954636   30068 main.go:141] libmachine: Running pre-create checks...
	I0621 18:27:37.954644   30068 main.go:141] libmachine: (ha-406291-m02) Calling .PreCreateCheck
	I0621 18:27:37.954836   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:37.955238   30068 main.go:141] libmachine: Creating machine...
	I0621 18:27:37.955253   30068 main.go:141] libmachine: (ha-406291-m02) Calling .Create
	I0621 18:27:37.955404   30068 main.go:141] libmachine: (ha-406291-m02) Creating KVM machine...
	I0621 18:27:37.956748   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found existing default KVM network
	I0621 18:27:37.956951   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found existing private KVM network mk-ha-406291
	I0621 18:27:37.957069   30068 main.go:141] libmachine: (ha-406291-m02) Setting up store path in /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 ...
	I0621 18:27:37.957091   30068 main.go:141] libmachine: (ha-406291-m02) Building disk image from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 18:27:37.957139   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:37.957062   30460 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:27:37.957278   30068 main.go:141] libmachine: (ha-406291-m02) Downloading /home/jenkins/minikube-integration/19112-8111/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0621 18:27:38.178433   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.178291   30460 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa...
	I0621 18:27:38.322659   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.322470   30460 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/ha-406291-m02.rawdisk...
	I0621 18:27:38.322709   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 (perms=drwx------)
	I0621 18:27:38.322719   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Writing magic tar header
	I0621 18:27:38.322734   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Writing SSH key tar header
	I0621 18:27:38.322745   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.322583   30460 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 ...
	I0621 18:27:38.322758   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02
	I0621 18:27:38.322822   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines
	I0621 18:27:38.322839   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines (perms=drwxr-xr-x)
	I0621 18:27:38.322855   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube (perms=drwxr-xr-x)
	I0621 18:27:38.322864   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111 (perms=drwxrwxr-x)
	I0621 18:27:38.322874   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0621 18:27:38.322882   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0621 18:27:38.322896   30068 main.go:141] libmachine: (ha-406291-m02) Creating domain...
	I0621 18:27:38.322919   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:27:38.322939   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111
	I0621 18:27:38.322950   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0621 18:27:38.322968   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins
	I0621 18:27:38.322980   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home
	I0621 18:27:38.322988   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Skipping /home - not owner
	I0621 18:27:38.324031   30068 main.go:141] libmachine: (ha-406291-m02) define libvirt domain using xml: 
	I0621 18:27:38.324058   30068 main.go:141] libmachine: (ha-406291-m02) <domain type='kvm'>
	I0621 18:27:38.324071   30068 main.go:141] libmachine: (ha-406291-m02)   <name>ha-406291-m02</name>
	I0621 18:27:38.324078   30068 main.go:141] libmachine: (ha-406291-m02)   <memory unit='MiB'>2200</memory>
	I0621 18:27:38.324087   30068 main.go:141] libmachine: (ha-406291-m02)   <vcpu>2</vcpu>
	I0621 18:27:38.324098   30068 main.go:141] libmachine: (ha-406291-m02)   <features>
	I0621 18:27:38.324107   30068 main.go:141] libmachine: (ha-406291-m02)     <acpi/>
	I0621 18:27:38.324116   30068 main.go:141] libmachine: (ha-406291-m02)     <apic/>
	I0621 18:27:38.324125   30068 main.go:141] libmachine: (ha-406291-m02)     <pae/>
	I0621 18:27:38.324134   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324149   30068 main.go:141] libmachine: (ha-406291-m02)   </features>
	I0621 18:27:38.324164   30068 main.go:141] libmachine: (ha-406291-m02)   <cpu mode='host-passthrough'>
	I0621 18:27:38.324173   30068 main.go:141] libmachine: (ha-406291-m02)   
	I0621 18:27:38.324184   30068 main.go:141] libmachine: (ha-406291-m02)   </cpu>
	I0621 18:27:38.324199   30068 main.go:141] libmachine: (ha-406291-m02)   <os>
	I0621 18:27:38.324209   30068 main.go:141] libmachine: (ha-406291-m02)     <type>hvm</type>
	I0621 18:27:38.324220   30068 main.go:141] libmachine: (ha-406291-m02)     <boot dev='cdrom'/>
	I0621 18:27:38.324231   30068 main.go:141] libmachine: (ha-406291-m02)     <boot dev='hd'/>
	I0621 18:27:38.324258   30068 main.go:141] libmachine: (ha-406291-m02)     <bootmenu enable='no'/>
	I0621 18:27:38.324280   30068 main.go:141] libmachine: (ha-406291-m02)   </os>
	I0621 18:27:38.324293   30068 main.go:141] libmachine: (ha-406291-m02)   <devices>
	I0621 18:27:38.324310   30068 main.go:141] libmachine: (ha-406291-m02)     <disk type='file' device='cdrom'>
	I0621 18:27:38.324333   30068 main.go:141] libmachine: (ha-406291-m02)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/boot2docker.iso'/>
	I0621 18:27:38.324344   30068 main.go:141] libmachine: (ha-406291-m02)       <target dev='hdc' bus='scsi'/>
	I0621 18:27:38.324350   30068 main.go:141] libmachine: (ha-406291-m02)       <readonly/>
	I0621 18:27:38.324357   30068 main.go:141] libmachine: (ha-406291-m02)     </disk>
	I0621 18:27:38.324363   30068 main.go:141] libmachine: (ha-406291-m02)     <disk type='file' device='disk'>
	I0621 18:27:38.324375   30068 main.go:141] libmachine: (ha-406291-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0621 18:27:38.324390   30068 main.go:141] libmachine: (ha-406291-m02)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/ha-406291-m02.rawdisk'/>
	I0621 18:27:38.324401   30068 main.go:141] libmachine: (ha-406291-m02)       <target dev='hda' bus='virtio'/>
	I0621 18:27:38.324412   30068 main.go:141] libmachine: (ha-406291-m02)     </disk>
	I0621 18:27:38.324421   30068 main.go:141] libmachine: (ha-406291-m02)     <interface type='network'>
	I0621 18:27:38.324431   30068 main.go:141] libmachine: (ha-406291-m02)       <source network='mk-ha-406291'/>
	I0621 18:27:38.324442   30068 main.go:141] libmachine: (ha-406291-m02)       <model type='virtio'/>
	I0621 18:27:38.324453   30068 main.go:141] libmachine: (ha-406291-m02)     </interface>
	I0621 18:27:38.324465   30068 main.go:141] libmachine: (ha-406291-m02)     <interface type='network'>
	I0621 18:27:38.324474   30068 main.go:141] libmachine: (ha-406291-m02)       <source network='default'/>
	I0621 18:27:38.324481   30068 main.go:141] libmachine: (ha-406291-m02)       <model type='virtio'/>
	I0621 18:27:38.324493   30068 main.go:141] libmachine: (ha-406291-m02)     </interface>
	I0621 18:27:38.324503   30068 main.go:141] libmachine: (ha-406291-m02)     <serial type='pty'>
	I0621 18:27:38.324516   30068 main.go:141] libmachine: (ha-406291-m02)       <target port='0'/>
	I0621 18:27:38.324527   30068 main.go:141] libmachine: (ha-406291-m02)     </serial>
	I0621 18:27:38.324540   30068 main.go:141] libmachine: (ha-406291-m02)     <console type='pty'>
	I0621 18:27:38.324553   30068 main.go:141] libmachine: (ha-406291-m02)       <target type='serial' port='0'/>
	I0621 18:27:38.324562   30068 main.go:141] libmachine: (ha-406291-m02)     </console>
	I0621 18:27:38.324572   30068 main.go:141] libmachine: (ha-406291-m02)     <rng model='virtio'>
	I0621 18:27:38.324596   30068 main.go:141] libmachine: (ha-406291-m02)       <backend model='random'>/dev/random</backend>
	I0621 18:27:38.324609   30068 main.go:141] libmachine: (ha-406291-m02)     </rng>
	I0621 18:27:38.324630   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324640   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324648   30068 main.go:141] libmachine: (ha-406291-m02)   </devices>
	I0621 18:27:38.324660   30068 main.go:141] libmachine: (ha-406291-m02) </domain>
	I0621 18:27:38.324670   30068 main.go:141] libmachine: (ha-406291-m02) 
	I0621 18:27:38.332042   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:20:08:0e in network default
	I0621 18:27:38.332641   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring networks are active...
	I0621 18:27:38.332676   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:38.333428   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring network default is active
	I0621 18:27:38.333804   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring network mk-ha-406291 is active
	I0621 18:27:38.334296   30068 main.go:141] libmachine: (ha-406291-m02) Getting domain xml...
	I0621 18:27:38.335120   30068 main.go:141] libmachine: (ha-406291-m02) Creating domain...
	I0621 18:27:39.549305   30068 main.go:141] libmachine: (ha-406291-m02) Waiting to get IP...
	I0621 18:27:39.550967   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:39.551951   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:39.551976   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:39.551936   30460 retry.go:31] will retry after 267.635955ms: waiting for machine to come up
	I0621 18:27:39.821522   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:39.821997   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:39.822029   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:39.821946   30460 retry.go:31] will retry after 374.873977ms: waiting for machine to come up
	I0621 18:27:40.198386   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:40.198873   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:40.198904   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:40.198809   30460 retry.go:31] will retry after 315.815993ms: waiting for machine to come up
	I0621 18:27:40.516366   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:40.516862   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:40.516886   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:40.516817   30460 retry.go:31] will retry after 541.866776ms: waiting for machine to come up
	I0621 18:27:41.060525   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:41.061206   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:41.061240   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:41.061128   30460 retry.go:31] will retry after 493.062164ms: waiting for machine to come up
	I0621 18:27:41.555747   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:41.556109   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:41.556139   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:41.556061   30460 retry.go:31] will retry after 805.68132ms: waiting for machine to come up
	I0621 18:27:42.362929   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:42.363432   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:42.363464   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:42.363390   30460 retry.go:31] will retry after 986.445399ms: waiting for machine to come up
	I0621 18:27:43.351818   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:43.352265   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:43.352293   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:43.352201   30460 retry.go:31] will retry after 1.001415085s: waiting for machine to come up
	I0621 18:27:44.355253   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:44.355689   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:44.355710   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:44.355671   30460 retry.go:31] will retry after 1.270979624s: waiting for machine to come up
	I0621 18:27:45.627787   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:45.628323   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:45.628354   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:45.628272   30460 retry.go:31] will retry after 2.328221347s: waiting for machine to come up
	I0621 18:27:47.958352   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:47.958918   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:47.958945   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:47.958858   30460 retry.go:31] will retry after 2.603205559s: waiting for machine to come up
	I0621 18:27:50.565502   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:50.565956   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:50.565982   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:50.565839   30460 retry.go:31] will retry after 3.267607258s: waiting for machine to come up
	I0621 18:27:53.834801   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:53.835311   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:53.835344   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:53.835270   30460 retry.go:31] will retry after 4.450176964s: waiting for machine to come up
	I0621 18:27:58.286744   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.287205   30068 main.go:141] libmachine: (ha-406291-m02) Found IP for machine: 192.168.39.89
	I0621 18:27:58.287228   30068 main.go:141] libmachine: (ha-406291-m02) Reserving static IP address...
	I0621 18:27:58.287241   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has current primary IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.287601   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find host DHCP lease matching {name: "ha-406291-m02", mac: "52:54:00:a6:9a:09", ip: "192.168.39.89"} in network mk-ha-406291
	I0621 18:27:58.359643   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Getting to WaitForSSH function...
	I0621 18:27:58.359672   30068 main.go:141] libmachine: (ha-406291-m02) Reserved static IP address: 192.168.39.89
	I0621 18:27:58.359686   30068 main.go:141] libmachine: (ha-406291-m02) Waiting for SSH to be available...
	I0621 18:27:58.362234   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.362656   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.362687   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.362831   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH client type: external
	I0621 18:27:58.362856   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa (-rw-------)
	I0621 18:27:58.362889   30068 main.go:141] libmachine: (ha-406291-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:58.362901   30068 main.go:141] libmachine: (ha-406291-m02) DBG | About to run SSH command:
	I0621 18:27:58.362914   30068 main.go:141] libmachine: (ha-406291-m02) DBG | exit 0
	I0621 18:27:58.489760   30068 main.go:141] libmachine: (ha-406291-m02) DBG | SSH cmd err, output: <nil>: 
	I0621 18:27:58.490247   30068 main.go:141] libmachine: (ha-406291-m02) KVM machine creation complete!
	I0621 18:27:58.490512   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:58.491093   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:58.491338   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:58.491506   30068 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0621 18:27:58.491523   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:27:58.492807   30068 main.go:141] libmachine: Detecting operating system of created instance...
	I0621 18:27:58.492820   30068 main.go:141] libmachine: Waiting for SSH to be available...
	I0621 18:27:58.492825   30068 main.go:141] libmachine: Getting to WaitForSSH function...
	I0621 18:27:58.492853   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.495422   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.495802   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.495822   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.496013   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.496199   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.496377   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.496515   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.496690   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.496943   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.496957   30068 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0621 18:27:58.609072   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:58.609094   30068 main.go:141] libmachine: Detecting the provisioner...
	I0621 18:27:58.609101   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.611976   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.612412   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.612450   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.612655   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.612869   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.613083   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.613234   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.613421   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.613617   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.613629   30068 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0621 18:27:58.726636   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0621 18:27:58.726736   30068 main.go:141] libmachine: found compatible host: buildroot
	I0621 18:27:58.726751   30068 main.go:141] libmachine: Provisioning with buildroot...
	I0621 18:27:58.726768   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.727017   30068 buildroot.go:166] provisioning hostname "ha-406291-m02"
	I0621 18:27:58.727040   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.727234   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.729851   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.730255   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.730296   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.730453   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.730628   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.730787   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.730932   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.731090   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.731271   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.731295   30068 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291-m02 && echo "ha-406291-m02" | sudo tee /etc/hostname
	I0621 18:27:58.855682   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291-m02
	
	I0621 18:27:58.855710   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.858373   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.858679   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.858702   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.858921   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.859107   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.859289   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.859473   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.859613   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.859768   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.859784   30068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:27:58.979692   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:58.979722   30068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:27:58.979735   30068 buildroot.go:174] setting up certificates
	I0621 18:27:58.979743   30068 provision.go:84] configureAuth start
	I0621 18:27:58.979750   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.980076   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:58.982730   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.983078   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.983110   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.983252   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.985344   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.985701   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.985721   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.985890   30068 provision.go:143] copyHostCerts
	I0621 18:27:58.985924   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:58.985962   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:27:58.985976   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:58.986057   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:27:58.986156   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:58.986180   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:27:58.986187   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:58.986229   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:27:58.986293   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:58.986317   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:27:58.986326   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:58.986360   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:27:58.986426   30068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291-m02 san=[127.0.0.1 192.168.39.89 ha-406291-m02 localhost minikube]
	I0621 18:27:59.066564   30068 provision.go:177] copyRemoteCerts
	I0621 18:27:59.066626   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:27:59.066653   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.069578   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.069924   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.069947   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.070132   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.070298   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.070432   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.070553   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.157218   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:27:59.157315   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 18:27:59.181198   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:27:59.181277   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:27:59.204590   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:27:59.204671   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0621 18:27:59.228836   30068 provision.go:87] duration metric: took 249.081961ms to configureAuth
	I0621 18:27:59.228857   30068 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:27:59.229023   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:59.229086   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.231759   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.232083   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.232114   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.232338   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.232525   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.232684   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.232859   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.233030   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:59.233222   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:59.233247   30068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:27:59.513149   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 18:27:59.513176   30068 main.go:141] libmachine: Checking connection to Docker...
	I0621 18:27:59.513184   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetURL
	I0621 18:27:59.514352   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using libvirt version 6000000
	I0621 18:27:59.516825   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.517208   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.517232   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.517421   30068 main.go:141] libmachine: Docker is up and running!
	I0621 18:27:59.517438   30068 main.go:141] libmachine: Reticulating splines...
	I0621 18:27:59.517446   30068 client.go:171] duration metric: took 21.562982419s to LocalClient.Create
	I0621 18:27:59.517469   30068 start.go:167] duration metric: took 21.563040702s to libmachine.API.Create "ha-406291"
	I0621 18:27:59.517482   30068 start.go:293] postStartSetup for "ha-406291-m02" (driver="kvm2")
	I0621 18:27:59.517494   30068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:27:59.517516   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.517768   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:27:59.517792   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.520113   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.520510   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.520540   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.520681   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.520881   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.521084   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.521256   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.607755   30068 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:27:59.611555   30068 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:27:59.611581   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:27:59.611696   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:27:59.611804   30068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:27:59.611817   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:27:59.611939   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:27:59.620359   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:59.643420   30068 start.go:296] duration metric: took 125.923821ms for postStartSetup
	I0621 18:27:59.643465   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:59.644062   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:59.646345   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.646685   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.646713   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.646924   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:59.647158   30068 start.go:128] duration metric: took 21.710666055s to createHost
	I0621 18:27:59.647181   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.649469   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.649766   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.649808   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.649962   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.650164   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.650334   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.650463   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.650585   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:59.650778   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:59.650790   30068 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0621 18:27:59.762223   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718994479.737744516
	
	I0621 18:27:59.762248   30068 fix.go:216] guest clock: 1718994479.737744516
	I0621 18:27:59.762259   30068 fix.go:229] Guest: 2024-06-21 18:27:59.737744516 +0000 UTC Remote: 2024-06-21 18:27:59.647170431 +0000 UTC m=+77.232139235 (delta=90.574085ms)
	I0621 18:27:59.762274   30068 fix.go:200] guest clock delta is within tolerance: 90.574085ms
	I0621 18:27:59.762279   30068 start.go:83] releasing machines lock for "ha-406291-m02", held for 21.825898335s
	I0621 18:27:59.762294   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.762550   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:59.765379   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.765744   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.765772   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.768017   30068 out.go:177] * Found network options:
	I0621 18:27:59.769201   30068 out.go:177]   - NO_PROXY=192.168.39.198
	W0621 18:27:59.770311   30068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0621 18:27:59.770350   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.770853   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.771049   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.771143   30068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:27:59.771180   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	W0621 18:27:59.771247   30068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0621 18:27:59.771305   30068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:27:59.771322   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.774073   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774210   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774455   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.774482   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774586   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.774595   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.774615   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774740   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.774796   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.774875   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.774963   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.775030   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.775150   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.775184   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:28:00.009851   30068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:28:00.016373   30068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:28:00.016450   30068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:28:00.032199   30068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0621 18:28:00.032221   30068 start.go:494] detecting cgroup driver to use...
	I0621 18:28:00.032283   30068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:28:00.047343   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:28:00.061720   30068 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:28:00.061774   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:28:00.074668   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:28:00.087919   30068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:28:00.213060   30068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:28:00.376339   30068 docker.go:233] disabling docker service ...
	I0621 18:28:00.376406   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:28:00.391732   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:28:00.405305   30068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:28:00.525867   30068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:28:00.642362   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 18:28:00.656276   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:28:00.673811   30068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:28:00.673883   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.683794   30068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:28:00.683849   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.693601   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.703298   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.712924   30068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:28:00.722921   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.733272   30068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.749781   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.759708   30068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:28:00.768749   30068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 18:28:00.768811   30068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 18:28:00.780758   30068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 18:28:00.789993   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:28:00.904855   30068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 18:28:01.039631   30068 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:28:01.039706   30068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:28:01.044480   30068 start.go:562] Will wait 60s for crictl version
	I0621 18:28:01.044536   30068 ssh_runner.go:195] Run: which crictl
	I0621 18:28:01.048220   30068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:28:01.089333   30068 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:28:01.089402   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:28:01.115665   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:28:01.144585   30068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:28:01.145952   30068 out.go:177]   - env NO_PROXY=192.168.39.198
	I0621 18:28:01.147149   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:28:01.149745   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:28:01.150121   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:28:01.150153   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:28:01.150424   30068 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:28:01.154395   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:28:01.167802   30068 mustload.go:65] Loading cluster: ha-406291
	I0621 18:28:01.168024   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:28:01.168528   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:28:01.168581   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:28:01.183458   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35465
	I0621 18:28:01.183955   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:28:01.184452   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:28:01.184472   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:28:01.184809   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:28:01.185006   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:28:01.186504   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:28:01.186796   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:28:01.186838   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:28:01.201898   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38995
	I0621 18:28:01.202307   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:28:01.202715   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:28:01.202735   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:28:01.203060   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:28:01.203242   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:28:01.203402   30068 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.89
	I0621 18:28:01.203414   30068 certs.go:194] generating shared ca certs ...
	I0621 18:28:01.203427   30068 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.203536   30068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:28:01.203569   30068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:28:01.203578   30068 certs.go:256] generating profile certs ...
	I0621 18:28:01.203637   30068 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:28:01.203663   30068 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63
	I0621 18:28:01.203682   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.89 192.168.39.254]
	I0621 18:28:01.277240   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 ...
	I0621 18:28:01.277269   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63: {Name:mk0eb1e86875fe5e87f845f9e621f66001c859bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.277433   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63 ...
	I0621 18:28:01.277446   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63: {Name:mk95e28e76a927e44fae3dabafa76ecc474c70ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.277517   30068 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:28:01.277686   30068 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
	I0621 18:28:01.277852   30068 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:28:01.277870   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:28:01.277883   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:28:01.277894   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:28:01.277906   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:28:01.277922   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:28:01.277934   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:28:01.277946   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:28:01.277957   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:28:01.278003   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:28:01.278030   30068 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:28:01.278039   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:28:01.278059   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:28:01.278080   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:28:01.278100   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:28:01.278136   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:28:01.278162   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.278179   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.278191   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.278220   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:28:01.281289   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:28:01.281749   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:28:01.281771   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:28:01.281960   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:28:01.282180   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:28:01.282351   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:28:01.282534   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:28:01.350153   30068 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0621 18:28:01.355146   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0621 18:28:01.366317   30068 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0621 18:28:01.370418   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0621 18:28:01.381527   30068 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0621 18:28:01.385371   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0621 18:28:01.395583   30068 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0621 18:28:01.399523   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0621 18:28:01.409427   30068 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0621 18:28:01.413340   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0621 18:28:01.424281   30068 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0621 18:28:01.428574   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0621 18:28:01.443501   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:28:01.467141   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:28:01.489464   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:28:01.512839   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:28:01.536345   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0621 18:28:01.560903   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0621 18:28:01.585228   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:28:01.609236   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:28:01.632797   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:28:01.657717   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:28:01.680728   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:28:01.704813   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0621 18:28:01.722206   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0621 18:28:01.739548   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0621 18:28:01.757066   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0621 18:28:01.773769   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0621 18:28:01.790648   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0621 18:28:01.807019   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0621 18:28:01.824606   30068 ssh_runner.go:195] Run: openssl version
	I0621 18:28:01.830760   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:28:01.841994   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.846701   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.846753   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.852556   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0621 18:28:01.863407   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:28:01.874586   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.879134   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.879185   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.884636   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:28:01.895639   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:28:01.907107   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.911747   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.911813   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.917537   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 18:28:01.928452   30068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:28:01.932569   30068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 18:28:01.932640   30068 kubeadm.go:928] updating node {m02 192.168.39.89 8443 v1.30.2 crio true true} ...
	I0621 18:28:01.932831   30068 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0621 18:28:01.932869   30068 kube-vip.go:115] generating kube-vip config ...
	I0621 18:28:01.932919   30068 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:28:01.949970   30068 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:28:01.950046   30068 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0621 18:28:01.950102   30068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:28:01.960116   30068 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0621 18:28:01.960197   30068 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0621 18:28:01.969893   30068 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0621 18:28:01.969926   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0621 18:28:01.969997   30068 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0621 18:28:01.970033   30068 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0621 18:28:01.970001   30068 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0621 18:28:01.974344   30068 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0621 18:28:01.974375   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0621 18:28:02.755689   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0621 18:28:02.755764   30068 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0621 18:28:02.760415   30068 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0621 18:28:02.760448   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0621 18:28:55.051081   30068 out.go:177] 
	W0621 18:28:55.052955   30068 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0] Decompressors:map[bz2:0xc000769610 gz:0xc000769618 tar:0xc0007695c0 tar.bz2:0xc0007695d0 tar.gz:0xc0007695e0 tar.xz:0xc0007695f0 tar.zst:0xc000769600 tbz2:0xc0007695d0 tgz:0xc0007695e0 txz:0xc0007695f0 tzst:0xc000769600 xz:0xc000769620 zip:0xc000769630 zst:0xc000769628] Getters:map[file:0xc0009371c0 http:0xc0008bcf50 https:0xc0008bcfa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:46716->151.101.193.55:443: read: connection reset by peer
	X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0] Decompressors:map[bz2:0xc000769610 gz:0xc000769618 tar:0xc0007695c0 tar.bz2:0xc0007695d0 tar.gz:0xc0007695e0 tar.xz:0xc0007695f0 tar.zst:0xc000769600 tbz2:0xc0007695d0 tgz:0xc0007695e0 txz:0xc0007695f0 tzst:0xc000769600 xz:0xc000769620 zip:0xc000769630 zst:0xc000769628] Getters:map[file:0xc0009371c0 http:0xc0008bcf50 https:0xc0008bcfa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:46716->151.101.193.55:443: read: connection reset by peer
	W0621 18:28:55.052979   30068 out.go:239] * 
	* 
	W0621 18:28:55.053829   30068 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0621 18:28:55.055312   30068 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-linux-amd64 start -p ha-406291 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-406291 -n ha-406291
helpers_test.go:244: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-406291 logs -n 25: (1.168929886s)
helpers_test.go:252: TestMultiControlPlane/serial/StartCluster logs: 
-- stdout --
	
	==> Audit <==
	|----------------|---------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                    |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|---------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-620822 ssh sudo cat                                            | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	|                | /etc/test/nested/copy/15329/hosts                                         |                   |         |         |                     |                     |
	| image          | functional-620822 image ls                                                | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	| image          | functional-620822 image load --daemon                                     | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-620822                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| service        | functional-620822 service                                                 | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	|                | hello-node-connect --url                                                  |                   |         |         |                     |                     |
	| update-context | functional-620822                                                         | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	|                | update-context                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                   |         |         |                     |                     |
	| update-context | functional-620822                                                         | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	|                | update-context                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                   |         |         |                     |                     |
	| update-context | functional-620822                                                         | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	|                | update-context                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                   |         |         |                     |                     |
	| image          | functional-620822 image ls                                                | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	| image          | functional-620822 image load --daemon                                     | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-620822                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| image          | functional-620822 image ls                                                | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	| image          | functional-620822 image save                                              | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-620822                  |                   |         |         |                     |                     |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| image          | functional-620822 image rm                                                | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-620822                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| image          | functional-620822 image ls                                                | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	| image          | functional-620822 image load                                              | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| image          | functional-620822 image ls                                                | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	| image          | functional-620822 image save --daemon                                     | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-620822                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| image          | functional-620822                                                         | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	|                | image ls --format short                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| image          | functional-620822                                                         | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	|                | image ls --format yaml                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| image          | functional-620822                                                         | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	|                | image ls --format json                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| image          | functional-620822                                                         | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	|                | image ls --format table                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| ssh            | functional-620822 ssh pgrep                                               | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC |                     |
	|                | buildkitd                                                                 |                   |         |         |                     |                     |
	| image          | functional-620822 image build -t                                          | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	|                | localhost/my-image:functional-620822                                      |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                          |                   |         |         |                     |                     |
	| image          | functional-620822 image ls                                                | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	| delete         | -p functional-620822                                                      | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	| start          | -p ha-406291 --wait=true                                                  | ha-406291         | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC |                     |
	|                | --memory=2200 --ha                                                        |                   |         |         |                     |                     |
	|                | -v=7 --alsologtostderr                                                    |                   |         |         |                     |                     |
	|                | --driver=kvm2                                                             |                   |         |         |                     |                     |
	|                | --container-runtime=crio                                                  |                   |         |         |                     |                     |
	|----------------|---------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/21 18:26:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0621 18:26:42.447747   30068 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:26:42.447858   30068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:26:42.447867   30068 out.go:304] Setting ErrFile to fd 2...
	I0621 18:26:42.447871   30068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:26:42.448064   30068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:26:42.448611   30068 out.go:298] Setting JSON to false
	I0621 18:26:42.449397   30068 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4100,"bootTime":1718990302,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 18:26:42.449454   30068 start.go:139] virtualization: kvm guest
	I0621 18:26:42.451750   30068 out.go:177] * [ha-406291] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 18:26:42.453097   30068 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 18:26:42.453116   30068 notify.go:220] Checking for updates...
	I0621 18:26:42.456195   30068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 18:26:42.457398   30068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:26:42.458579   30068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.459798   30068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 18:26:42.461088   30068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 18:26:42.462525   30068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 18:26:42.497263   30068 out.go:177] * Using the kvm2 driver based on user configuration
	I0621 18:26:42.498734   30068 start.go:297] selected driver: kvm2
	I0621 18:26:42.498753   30068 start.go:901] validating driver "kvm2" against <nil>
	I0621 18:26:42.498763   30068 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 18:26:42.499421   30068 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:26:42.499483   30068 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 18:26:42.513772   30068 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 18:26:42.513840   30068 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0621 18:26:42.514036   30068 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0621 18:26:42.514063   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:26:42.514070   30068 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0621 18:26:42.514080   30068 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0621 18:26:42.514119   30068 start.go:340] cluster config:
	{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:26:42.514203   30068 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:26:42.515839   30068 out.go:177] * Starting "ha-406291" primary control-plane node in "ha-406291" cluster
	I0621 18:26:42.516925   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:26:42.516952   30068 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 18:26:42.516960   30068 cache.go:56] Caching tarball of preloaded images
	I0621 18:26:42.517025   30068 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:26:42.517035   30068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:26:42.517302   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:26:42.517325   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json: {Name:mkd43eceea282503c79b6e4b90bbf7258fcf8b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:26:42.517445   30068 start.go:360] acquireMachinesLock for ha-406291: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:26:42.517470   30068 start.go:364] duration metric: took 13.314µs to acquireMachinesLock for "ha-406291"
	I0621 18:26:42.517485   30068 start.go:93] Provisioning new machine with config: &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:26:42.517531   30068 start.go:125] createHost starting for "" (driver="kvm2")
	I0621 18:26:42.518937   30068 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0621 18:26:42.519071   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:26:42.519109   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:26:42.533235   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0621 18:26:42.533669   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:26:42.534312   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:26:42.534360   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:26:42.534665   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:26:42.534880   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:26:42.535018   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:26:42.535180   30068 start.go:159] libmachine.API.Create for "ha-406291" (driver="kvm2")
	I0621 18:26:42.535209   30068 client.go:168] LocalClient.Create starting
	I0621 18:26:42.535233   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem
	I0621 18:26:42.535267   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:26:42.535282   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:26:42.535339   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem
	I0621 18:26:42.535357   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:26:42.535367   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:26:42.535383   30068 main.go:141] libmachine: Running pre-create checks...
	I0621 18:26:42.535396   30068 main.go:141] libmachine: (ha-406291) Calling .PreCreateCheck
	I0621 18:26:42.535734   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:26:42.536101   30068 main.go:141] libmachine: Creating machine...
	I0621 18:26:42.536113   30068 main.go:141] libmachine: (ha-406291) Calling .Create
	I0621 18:26:42.536232   30068 main.go:141] libmachine: (ha-406291) Creating KVM machine...
	I0621 18:26:42.537484   30068 main.go:141] libmachine: (ha-406291) DBG | found existing default KVM network
	I0621 18:26:42.538310   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.538153   30091 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0621 18:26:42.538339   30068 main.go:141] libmachine: (ha-406291) DBG | created network xml: 
	I0621 18:26:42.538346   30068 main.go:141] libmachine: (ha-406291) DBG | <network>
	I0621 18:26:42.538355   30068 main.go:141] libmachine: (ha-406291) DBG |   <name>mk-ha-406291</name>
	I0621 18:26:42.538371   30068 main.go:141] libmachine: (ha-406291) DBG |   <dns enable='no'/>
	I0621 18:26:42.538385   30068 main.go:141] libmachine: (ha-406291) DBG |   
	I0621 18:26:42.538392   30068 main.go:141] libmachine: (ha-406291) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0621 18:26:42.538400   30068 main.go:141] libmachine: (ha-406291) DBG |     <dhcp>
	I0621 18:26:42.538412   30068 main.go:141] libmachine: (ha-406291) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0621 18:26:42.538421   30068 main.go:141] libmachine: (ha-406291) DBG |     </dhcp>
	I0621 18:26:42.538439   30068 main.go:141] libmachine: (ha-406291) DBG |   </ip>
	I0621 18:26:42.538451   30068 main.go:141] libmachine: (ha-406291) DBG |   
	I0621 18:26:42.538458   30068 main.go:141] libmachine: (ha-406291) DBG | </network>
	I0621 18:26:42.538470   30068 main.go:141] libmachine: (ha-406291) DBG | 
	I0621 18:26:42.543401   30068 main.go:141] libmachine: (ha-406291) DBG | trying to create private KVM network mk-ha-406291 192.168.39.0/24...
	I0621 18:26:42.606041   30068 main.go:141] libmachine: (ha-406291) DBG | private KVM network mk-ha-406291 192.168.39.0/24 created
	I0621 18:26:42.606072   30068 main.go:141] libmachine: (ha-406291) Setting up store path in /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 ...
	I0621 18:26:42.606091   30068 main.go:141] libmachine: (ha-406291) Building disk image from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 18:26:42.606165   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.606075   30091 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.606280   30068 main.go:141] libmachine: (ha-406291) Downloading /home/jenkins/minikube-integration/19112-8111/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0621 18:26:42.829374   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.829262   30091 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa...
	I0621 18:26:42.941790   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.941666   30091 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/ha-406291.rawdisk...
	I0621 18:26:42.941834   30068 main.go:141] libmachine: (ha-406291) DBG | Writing magic tar header
	I0621 18:26:42.941844   30068 main.go:141] libmachine: (ha-406291) DBG | Writing SSH key tar header
	I0621 18:26:42.941852   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.941778   30091 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 ...
	I0621 18:26:42.941909   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291
	I0621 18:26:42.941989   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 (perms=drwx------)
	I0621 18:26:42.942007   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines
	I0621 18:26:42.942019   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines (perms=drwxr-xr-x)
	I0621 18:26:42.942033   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube (perms=drwxr-xr-x)
	I0621 18:26:42.942053   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111 (perms=drwxrwxr-x)
	I0621 18:26:42.942060   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.942069   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111
	I0621 18:26:42.942075   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0621 18:26:42.942080   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0621 18:26:42.942088   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins
	I0621 18:26:42.942104   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0621 18:26:42.942117   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home
	I0621 18:26:42.942128   30068 main.go:141] libmachine: (ha-406291) DBG | Skipping /home - not owner
	I0621 18:26:42.942142   30068 main.go:141] libmachine: (ha-406291) Creating domain...
	I0621 18:26:42.943154   30068 main.go:141] libmachine: (ha-406291) define libvirt domain using xml: 
	I0621 18:26:42.943176   30068 main.go:141] libmachine: (ha-406291) <domain type='kvm'>
	I0621 18:26:42.943183   30068 main.go:141] libmachine: (ha-406291)   <name>ha-406291</name>
	I0621 18:26:42.943188   30068 main.go:141] libmachine: (ha-406291)   <memory unit='MiB'>2200</memory>
	I0621 18:26:42.943199   30068 main.go:141] libmachine: (ha-406291)   <vcpu>2</vcpu>
	I0621 18:26:42.943203   30068 main.go:141] libmachine: (ha-406291)   <features>
	I0621 18:26:42.943208   30068 main.go:141] libmachine: (ha-406291)     <acpi/>
	I0621 18:26:42.943212   30068 main.go:141] libmachine: (ha-406291)     <apic/>
	I0621 18:26:42.943217   30068 main.go:141] libmachine: (ha-406291)     <pae/>
	I0621 18:26:42.943223   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943229   30068 main.go:141] libmachine: (ha-406291)   </features>
	I0621 18:26:42.943234   30068 main.go:141] libmachine: (ha-406291)   <cpu mode='host-passthrough'>
	I0621 18:26:42.943255   30068 main.go:141] libmachine: (ha-406291)   
	I0621 18:26:42.943266   30068 main.go:141] libmachine: (ha-406291)   </cpu>
	I0621 18:26:42.943284   30068 main.go:141] libmachine: (ha-406291)   <os>
	I0621 18:26:42.943318   30068 main.go:141] libmachine: (ha-406291)     <type>hvm</type>
	I0621 18:26:42.943328   30068 main.go:141] libmachine: (ha-406291)     <boot dev='cdrom'/>
	I0621 18:26:42.943333   30068 main.go:141] libmachine: (ha-406291)     <boot dev='hd'/>
	I0621 18:26:42.943341   30068 main.go:141] libmachine: (ha-406291)     <bootmenu enable='no'/>
	I0621 18:26:42.943345   30068 main.go:141] libmachine: (ha-406291)   </os>
	I0621 18:26:42.943355   30068 main.go:141] libmachine: (ha-406291)   <devices>
	I0621 18:26:42.943360   30068 main.go:141] libmachine: (ha-406291)     <disk type='file' device='cdrom'>
	I0621 18:26:42.943371   30068 main.go:141] libmachine: (ha-406291)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/boot2docker.iso'/>
	I0621 18:26:42.943384   30068 main.go:141] libmachine: (ha-406291)       <target dev='hdc' bus='scsi'/>
	I0621 18:26:42.943397   30068 main.go:141] libmachine: (ha-406291)       <readonly/>
	I0621 18:26:42.943404   30068 main.go:141] libmachine: (ha-406291)     </disk>
	I0621 18:26:42.943417   30068 main.go:141] libmachine: (ha-406291)     <disk type='file' device='disk'>
	I0621 18:26:42.943429   30068 main.go:141] libmachine: (ha-406291)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0621 18:26:42.943445   30068 main.go:141] libmachine: (ha-406291)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/ha-406291.rawdisk'/>
	I0621 18:26:42.943456   30068 main.go:141] libmachine: (ha-406291)       <target dev='hda' bus='virtio'/>
	I0621 18:26:42.943478   30068 main.go:141] libmachine: (ha-406291)     </disk>
	I0621 18:26:42.943499   30068 main.go:141] libmachine: (ha-406291)     <interface type='network'>
	I0621 18:26:42.943509   30068 main.go:141] libmachine: (ha-406291)       <source network='mk-ha-406291'/>
	I0621 18:26:42.943513   30068 main.go:141] libmachine: (ha-406291)       <model type='virtio'/>
	I0621 18:26:42.943519   30068 main.go:141] libmachine: (ha-406291)     </interface>
	I0621 18:26:42.943526   30068 main.go:141] libmachine: (ha-406291)     <interface type='network'>
	I0621 18:26:42.943532   30068 main.go:141] libmachine: (ha-406291)       <source network='default'/>
	I0621 18:26:42.943539   30068 main.go:141] libmachine: (ha-406291)       <model type='virtio'/>
	I0621 18:26:42.943544   30068 main.go:141] libmachine: (ha-406291)     </interface>
	I0621 18:26:42.943549   30068 main.go:141] libmachine: (ha-406291)     <serial type='pty'>
	I0621 18:26:42.943554   30068 main.go:141] libmachine: (ha-406291)       <target port='0'/>
	I0621 18:26:42.943560   30068 main.go:141] libmachine: (ha-406291)     </serial>
	I0621 18:26:42.943565   30068 main.go:141] libmachine: (ha-406291)     <console type='pty'>
	I0621 18:26:42.943571   30068 main.go:141] libmachine: (ha-406291)       <target type='serial' port='0'/>
	I0621 18:26:42.943583   30068 main.go:141] libmachine: (ha-406291)     </console>
	I0621 18:26:42.943593   30068 main.go:141] libmachine: (ha-406291)     <rng model='virtio'>
	I0621 18:26:42.943602   30068 main.go:141] libmachine: (ha-406291)       <backend model='random'>/dev/random</backend>
	I0621 18:26:42.943609   30068 main.go:141] libmachine: (ha-406291)     </rng>
	I0621 18:26:42.943617   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943621   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943627   30068 main.go:141] libmachine: (ha-406291)   </devices>
	I0621 18:26:42.943631   30068 main.go:141] libmachine: (ha-406291) </domain>
	I0621 18:26:42.943638   30068 main.go:141] libmachine: (ha-406291) 
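Having assembled the &lt;domain&gt; XML printed above, the driver next defines and boots the guest ("Creating domain..."). A hedged sketch of that step, again assuming the libvirt Go bindings and reusing an existing connection:

    // Sketch only: persist a domain definition and boot it, mirroring the
    // define-then-create step in the log. domainXML would be the <domain> document above.
    package kvm

    import (
        libvirt "libvirt.org/go/libvirt"
    )

    func createDomain(conn *libvirt.Connect, domainXML string) error {
        dom, err := conn.DomainDefineXML(domainXML) // define a persistent domain
        if err != nil {
            return err
        }
        defer dom.Free()

        return dom.Create() // start the defined domain (like `virsh start`)
    }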
	I0621 18:26:42.948298   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:44:10:c4 in network default
	I0621 18:26:42.948968   30068 main.go:141] libmachine: (ha-406291) Ensuring networks are active...
	I0621 18:26:42.948988   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:42.949710   30068 main.go:141] libmachine: (ha-406291) Ensuring network default is active
	I0621 18:26:42.950033   30068 main.go:141] libmachine: (ha-406291) Ensuring network mk-ha-406291 is active
	I0621 18:26:42.950493   30068 main.go:141] libmachine: (ha-406291) Getting domain xml...
	I0621 18:26:42.951151   30068 main.go:141] libmachine: (ha-406291) Creating domain...
	I0621 18:26:44.128421   30068 main.go:141] libmachine: (ha-406291) Waiting to get IP...
	I0621 18:26:44.129183   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.129530   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.129550   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.129513   30091 retry.go:31] will retry after 273.280189ms: waiting for machine to come up
	I0621 18:26:44.404590   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.405440   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.405467   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.405386   30091 retry.go:31] will retry after 363.287979ms: waiting for machine to come up
	I0621 18:26:44.769749   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.770188   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.770217   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.770146   30091 retry.go:31] will retry after 445.9009ms: waiting for machine to come up
	I0621 18:26:45.217708   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:45.218113   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:45.218132   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:45.218075   30091 retry.go:31] will retry after 497.769852ms: waiting for machine to come up
	I0621 18:26:45.717913   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:45.718380   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:45.718402   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:45.718333   30091 retry.go:31] will retry after 609.412902ms: waiting for machine to come up
	I0621 18:26:46.329589   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:46.330043   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:46.330077   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:46.330033   30091 retry.go:31] will retry after 668.226784ms: waiting for machine to come up
	I0621 18:26:46.999851   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:47.000352   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:47.000399   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:47.000310   30091 retry.go:31] will retry after 928.90777ms: waiting for machine to come up
	I0621 18:26:47.931043   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:47.931568   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:47.931598   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:47.931527   30091 retry.go:31] will retry after 1.407643188s: waiting for machine to come up
	I0621 18:26:49.341126   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:49.341529   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:49.341557   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:49.341489   30091 retry.go:31] will retry after 1.657120945s: waiting for machine to come up
	I0621 18:26:51.001518   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:51.001999   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:51.002022   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:51.001955   30091 retry.go:31] will retry after 1.506025988s: waiting for machine to come up
	I0621 18:26:52.509823   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:52.510314   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:52.510342   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:52.510269   30091 retry.go:31] will retry after 2.859818514s: waiting for machine to come up
	I0621 18:26:55.371181   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:55.371726   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:55.371755   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:55.371678   30091 retry.go:31] will retry after 3.374080501s: waiting for machine to come up
	I0621 18:26:58.747494   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:58.748019   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:58.748039   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:58.747991   30091 retry.go:31] will retry after 4.386740875s: waiting for machine to come up
	I0621 18:27:03.136546   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.137046   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has current primary IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.137063   30068 main.go:141] libmachine: (ha-406291) Found IP for machine: 192.168.39.198
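The repeated "will retry after ..." lines are the driver polling the network's DHCP leases with growing delays until the new domain obtains an address. A minimal, standard-library-only sketch of that retry pattern follows; the lookup callback is a hypothetical stand-in for the real lease query.

    // Sketch only: poll a lookup function with an increasing delay until it yields an IP
    // or the deadline passes, roughly like the retry.go lines in the log above.
    package kvm

    import (
        "fmt"
        "time"
    )

    func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, ok := lookup(); ok {
                return ip, nil
            }
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay += delay / 2 // back off a little further on each attempt
            }
        }
        return "", fmt.Errorf("timed out waiting for machine IP")
    }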
	I0621 18:27:03.137079   30068 main.go:141] libmachine: (ha-406291) Reserving static IP address...
	I0621 18:27:03.137427   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find host DHCP lease matching {name: "ha-406291", mac: "52:54:00:38:dc:46", ip: "192.168.39.198"} in network mk-ha-406291
	I0621 18:27:03.211473   30068 main.go:141] libmachine: (ha-406291) DBG | Getting to WaitForSSH function...
	I0621 18:27:03.211506   30068 main.go:141] libmachine: (ha-406291) Reserved static IP address: 192.168.39.198
	I0621 18:27:03.211519   30068 main.go:141] libmachine: (ha-406291) Waiting for SSH to be available...
	I0621 18:27:03.214029   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.214477   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291
	I0621 18:27:03.214509   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find defined IP address of network mk-ha-406291 interface with MAC address 52:54:00:38:dc:46
	I0621 18:27:03.214661   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH client type: external
	I0621 18:27:03.214702   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa (-rw-------)
	I0621 18:27:03.214745   30068 main.go:141] libmachine: (ha-406291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:03.214771   30068 main.go:141] libmachine: (ha-406291) DBG | About to run SSH command:
	I0621 18:27:03.214784   30068 main.go:141] libmachine: (ha-406291) DBG | exit 0
	I0621 18:27:03.218578   30068 main.go:141] libmachine: (ha-406291) DBG | SSH cmd err, output: exit status 255: 
	I0621 18:27:03.218603   30068 main.go:141] libmachine: (ha-406291) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0621 18:27:03.218614   30068 main.go:141] libmachine: (ha-406291) DBG | command : exit 0
	I0621 18:27:03.218630   30068 main.go:141] libmachine: (ha-406291) DBG | err     : exit status 255
	I0621 18:27:03.218643   30068 main.go:141] libmachine: (ha-406291) DBG | output  : 
	I0621 18:27:06.220803   30068 main.go:141] libmachine: (ha-406291) DBG | Getting to WaitForSSH function...
	I0621 18:27:06.223287   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.223552   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.223591   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.223725   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH client type: external
	I0621 18:27:06.223751   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa (-rw-------)
	I0621 18:27:06.223775   30068 main.go:141] libmachine: (ha-406291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:06.223788   30068 main.go:141] libmachine: (ha-406291) DBG | About to run SSH command:
	I0621 18:27:06.223797   30068 main.go:141] libmachine: (ha-406291) DBG | exit 0
	I0621 18:27:06.345962   30068 main.go:141] libmachine: (ha-406291) DBG | SSH cmd err, output: <nil>: 
	I0621 18:27:06.346198   30068 main.go:141] libmachine: (ha-406291) KVM machine creation complete!
	I0621 18:27:06.346530   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:27:06.347151   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:06.347376   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:06.347539   30068 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0621 18:27:06.347553   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:06.349257   30068 main.go:141] libmachine: Detecting operating system of created instance...
	I0621 18:27:06.349272   30068 main.go:141] libmachine: Waiting for SSH to be available...
	I0621 18:27:06.349278   30068 main.go:141] libmachine: Getting to WaitForSSH function...
	I0621 18:27:06.349284   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.351365   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.351709   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.351738   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.351848   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.352053   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.352215   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.352441   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.352676   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.352926   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.352939   30068 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0621 18:27:06.449038   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
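The "exit 0" command above is a reachability probe: once it returns a nil error, SSH to the guest is considered available. As an illustration (not the libmachine implementation), such a probe can be written with golang.org/x/crypto/ssh and the machine's private key; host-key checking is skipped here only because the VM was created moments earlier by the same process.

    // Sketch only: run `exit 0` over SSH using a private-key credential.
    package kvm

    import (
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func probeSSH(addr, user, keyPath string) error {
        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // freshly created VM, no known host key yet
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        return session.Run("exit 0") // nil error means the command exited 0
    }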
	I0621 18:27:06.449066   30068 main.go:141] libmachine: Detecting the provisioner...
	I0621 18:27:06.449077   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.451811   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.452202   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.452223   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.452405   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.452602   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.452762   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.452898   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.453074   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.453321   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.453334   30068 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0621 18:27:06.550539   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0621 18:27:06.550611   30068 main.go:141] libmachine: found compatible host: buildroot
	I0621 18:27:06.550618   30068 main.go:141] libmachine: Provisioning with buildroot...
	I0621 18:27:06.550625   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.550871   30068 buildroot.go:166] provisioning hostname "ha-406291"
	I0621 18:27:06.550891   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.551068   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.553701   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.554112   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.554138   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.554279   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.554452   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.554601   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.554725   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.554869   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.555029   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.555040   30068 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291 && echo "ha-406291" | sudo tee /etc/hostname
	I0621 18:27:06.664012   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291
	
	I0621 18:27:06.664038   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.666600   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.666923   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.666952   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.667091   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.667277   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.667431   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.667559   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.667745   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.667932   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.667949   30068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:27:06.778156   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:06.778199   30068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:27:06.778224   30068 buildroot.go:174] setting up certificates
	I0621 18:27:06.778237   30068 provision.go:84] configureAuth start
	I0621 18:27:06.778250   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.778526   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:06.781267   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.781583   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.781610   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.781773   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.784225   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.784546   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.784564   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.784717   30068 provision.go:143] copyHostCerts
	I0621 18:27:06.784747   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:06.784796   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:27:06.784813   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:06.784893   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:27:06.784992   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:06.785017   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:27:06.785023   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:06.785064   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:27:06.785126   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:06.785153   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:27:06.785162   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:06.785194   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:27:06.785257   30068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291 san=[127.0.0.1 192.168.39.198 ha-406291 localhost minikube]
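The line above records generating a server certificate whose SANs are 127.0.0.1, 192.168.39.198, ha-406291, localhost and minikube. The sketch below shows how a certificate template carrying those SANs can be built with Go's crypto/x509; it self-signs purely to stay short, whereas the logged flow signs against ca.pem/ca-key.pem.

    // Sketch only: a server certificate template with the SANs listed in the log.
    package kvm

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "time"
    )

    func serverCertPEM() ([]byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-406291"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-406291", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.198")},
        }
        // Self-signed for brevity; the real provisioning signs with the CA key instead.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            return nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }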
	I0621 18:27:06.904910   30068 provision.go:177] copyRemoteCerts
	I0621 18:27:06.904976   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:27:06.905004   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.907600   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.907883   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.907916   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.908115   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.908308   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.908462   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.908599   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:06.987463   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:27:06.987540   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:27:07.009572   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:27:07.009661   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0621 18:27:07.031219   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:27:07.031333   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 18:27:07.052682   30068 provision.go:87] duration metric: took 274.433059ms to configureAuth
	I0621 18:27:07.052709   30068 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:27:07.052895   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:07.052984   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.055368   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.055720   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.055742   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.055971   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.056161   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.056324   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.056453   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.056615   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:07.056785   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:07.056814   30068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:27:07.307055   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 18:27:07.307083   30068 main.go:141] libmachine: Checking connection to Docker...
	I0621 18:27:07.307105   30068 main.go:141] libmachine: (ha-406291) Calling .GetURL
	I0621 18:27:07.308373   30068 main.go:141] libmachine: (ha-406291) DBG | Using libvirt version 6000000
	I0621 18:27:07.310322   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.310631   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.310658   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.310756   30068 main.go:141] libmachine: Docker is up and running!
	I0621 18:27:07.310768   30068 main.go:141] libmachine: Reticulating splines...
	I0621 18:27:07.310774   30068 client.go:171] duration metric: took 24.775558818s to LocalClient.Create
	I0621 18:27:07.310795   30068 start.go:167] duration metric: took 24.775614868s to libmachine.API.Create "ha-406291"
	I0621 18:27:07.310807   30068 start.go:293] postStartSetup for "ha-406291" (driver="kvm2")
	I0621 18:27:07.310818   30068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:27:07.310835   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.311186   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:27:07.311208   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.313308   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.313543   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.313581   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.313682   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.313855   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.314042   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.314209   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.391859   30068 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:27:07.396062   30068 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:27:07.396083   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:27:07.396132   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:27:07.396193   30068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:27:07.396202   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:27:07.396289   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:27:07.405435   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:07.427927   30068 start.go:296] duration metric: took 117.075834ms for postStartSetup
	I0621 18:27:07.427984   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:27:07.428562   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:07.431157   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.431479   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.431523   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.431791   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:07.431969   30068 start.go:128] duration metric: took 24.914429669s to createHost
	I0621 18:27:07.431990   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.434121   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.434421   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.434445   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.434510   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.434692   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.434865   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.435009   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.435168   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:07.435372   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:07.435384   30068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0621 18:27:07.530141   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718994427.508226463
	
	I0621 18:27:07.530165   30068 fix.go:216] guest clock: 1718994427.508226463
	I0621 18:27:07.530173   30068 fix.go:229] Guest: 2024-06-21 18:27:07.508226463 +0000 UTC Remote: 2024-06-21 18:27:07.431981059 +0000 UTC m=+25.016949864 (delta=76.245404ms)
	I0621 18:27:07.530199   30068 fix.go:200] guest clock delta is within tolerance: 76.245404ms
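The fix.go lines above compare the guest clock (queried via `date`) against the host clock and accept the machine when the delta is within tolerance (here about 76ms). A small, stdlib-only sketch of that comparison, assuming the guest output is seconds followed by a nine-digit nanosecond fraction:

    // Sketch only: parse a "seconds.nanoseconds" guest timestamp and check the
    // host/guest clock delta against a tolerance.
    package kvm

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func clockDeltaOK(guestOutput string, tolerance time.Duration) (time.Duration, bool, error) {
        parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, false, fmt.Errorf("bad guest clock %q: %v", guestOutput, err)
        }
        var nsec int64
        if len(parts) == 2 {
            // assumes a 9-digit fractional part, as produced by `date +%s.%N`
            nsec, _ = strconv.ParseInt(parts[1], 10, 64)
        }
        delta := time.Since(time.Unix(sec, nsec))
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance, nil
    }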
	I0621 18:27:07.530204   30068 start.go:83] releasing machines lock for "ha-406291", held for 25.012726918s
	I0621 18:27:07.530222   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.530466   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:07.532753   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.533110   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.533151   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.533275   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533702   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533877   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533978   30068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:27:07.534028   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.534087   30068 ssh_runner.go:195] Run: cat /version.json
	I0621 18:27:07.534115   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.536489   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536798   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.536828   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536845   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536983   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.537154   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.537312   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.537330   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.537337   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.537509   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.537507   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.537675   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.537830   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.537968   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.610886   30068 ssh_runner.go:195] Run: systemctl --version
	I0621 18:27:07.648150   30068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:27:07.798080   30068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:27:07.803683   30068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:27:07.803731   30068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:27:07.820345   30068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0621 18:27:07.820363   30068 start.go:494] detecting cgroup driver to use...
	I0621 18:27:07.820412   30068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:27:07.835960   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:27:07.849269   30068 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:27:07.849324   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:27:07.861858   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:27:07.874371   30068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:27:07.984965   30068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:27:08.126897   30068 docker.go:233] disabling docker service ...
	I0621 18:27:08.126973   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:27:08.140294   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:27:08.152460   30068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:27:08.289101   30068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:27:08.414578   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 18:27:08.428193   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:27:08.445335   30068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:27:08.445406   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.454715   30068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:27:08.454780   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.464286   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.473688   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.483215   30068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:27:08.492907   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.502386   30068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.518138   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
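The sed invocations above rewrite individual keys in /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager, conmon_cgroup, default_sysctls). Purely as an illustration of the same kind of edit, the helper below rewrites one "key = value" line with the Go standard library instead of sed; the function name setCrioOption is an assumption for the sketch.

    // Sketch only: replace a whole "key = ..." line in a CRI-O drop-in config file.
    package kvm

    import (
        "os"
        "regexp"
    )

    func setCrioOption(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        // e.g. key="pause_image", value=`"registry.k8s.io/pause:3.9"`
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(key+" = "+value))
        return os.WriteFile(path, out, 0o644)
    }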
	I0621 18:27:08.527822   30068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:27:08.536491   30068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 18:27:08.536537   30068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 18:27:08.548343   30068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 18:27:08.557395   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:27:08.668782   30068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 18:27:08.793146   30068 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:27:08.793228   30068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:27:08.797886   30068 start.go:562] Will wait 60s for crictl version
	I0621 18:27:08.797933   30068 ssh_runner.go:195] Run: which crictl
	I0621 18:27:08.801183   30068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:27:08.838953   30068 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:27:08.839028   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:27:08.865047   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:27:08.892059   30068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:27:08.893365   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:08.895801   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:08.896174   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:08.896198   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:08.896377   30068 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:27:08.900124   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:27:08.912152   30068 kubeadm.go:877] updating cluster {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0621 18:27:08.912252   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:27:08.912299   30068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:27:08.941267   30068 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0621 18:27:08.941328   30068 ssh_runner.go:195] Run: which lz4
	I0621 18:27:08.944757   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0621 18:27:08.944843   30068 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0621 18:27:08.948482   30068 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0621 18:27:08.948507   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0621 18:27:10.186487   30068 crio.go:462] duration metric: took 1.241671996s to copy over tarball
	I0621 18:27:10.186568   30068 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0621 18:27:12.219224   30068 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.032622286s)
	I0621 18:27:12.219256   30068 crio.go:469] duration metric: took 2.032747658s to extract the tarball
	I0621 18:27:12.219265   30068 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0621 18:27:12.255526   30068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:27:12.297692   30068 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 18:27:12.297715   30068 cache_images.go:84] Images are preloaded, skipping loading
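The preload sequence above is: list images with crictl, and if the expected ones are missing, scp the ~395 MB preloaded-images tarball into the VM, unpack it into /var (the CRI-O storage root), remove it, and re-check. A rough sketch of the extract-and-verify step, assuming the tarball has already been copied to /preloaded.tar.lz4 as in the log:

	# Unpack the preloaded image tarball into /var, preserving extended attributes
	# such as file capabilities, then clean up and confirm the images are visible.
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4
	sudo crictl images --output json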
	I0621 18:27:12.297725   30068 kubeadm.go:928] updating node { 192.168.39.198 8443 v1.30.2 crio true true} ...
	I0621 18:27:12.297863   30068 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
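The kubelet unit snippet above is installed as a systemd drop-in; the empty ExecStart= line clears the packaged command before the minikube-specific one takes effect. A hedged sketch of how such a drop-in is written and activated, using the same flags and paths as the log:

	# 10-kubeadm.conf is a drop-in that overrides ExecStart for kubelet.service.
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	EOF
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet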
	I0621 18:27:12.297956   30068 ssh_runner.go:195] Run: crio config
	I0621 18:27:12.347243   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:27:12.347276   30068 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0621 18:27:12.347288   30068 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0621 18:27:12.347314   30068 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.198 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-406291 NodeName:ha-406291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0621 18:27:12.347487   30068 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-406291"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
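The InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents above are written as a single multi-document file (/var/tmp/minikube/kubeadm.yaml, per the scp a few lines below) and handed to kubeadm in one go. One way to sanity-check such a file before the real init, shown here only as a sketch:

	# Validate the generated config and render what kubeadm would do, without touching the node.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	# Print the defaults kubeadm merges user config into, for comparison.
	kubeadm config print init-defaults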
	
	I0621 18:27:12.347514   30068 kube-vip.go:115] generating kube-vip config ...
	I0621 18:27:12.347563   30068 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:27:12.362180   30068 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:27:12.362273   30068 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
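The kube-vip static pod above elects a leader among the control-plane nodes (vip_leaderelection with the plndr-cp-lock lease), binds the HA virtual IP 192.168.39.254 as a /32 on eth0, and load-balances the API server on port 8443 (lb_enable/lb_port). A quick way to confirm the VIP is live once the pod runs, not part of the minikube flow itself:

	# On the leader, the VIP should appear as a secondary address on eth0 ...
	ip -4 addr show dev eth0 | grep 192.168.39.254
	# ... and the API server should answer on the VIP (TLS verification skipped for brevity).
	curl -k https://192.168.39.254:8443/healthz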
	I0621 18:27:12.362316   30068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:27:12.371448   30068 binaries.go:44] Found k8s binaries, skipping transfer
	I0621 18:27:12.371499   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0621 18:27:12.380031   30068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0621 18:27:12.395354   30068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0621 18:27:12.410533   30068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0621 18:27:12.425474   30068 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0621 18:27:12.440059   30068 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0621 18:27:12.443523   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:27:12.454828   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:27:12.572486   30068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0621 18:27:12.589057   30068 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.198
	I0621 18:27:12.589078   30068 certs.go:194] generating shared ca certs ...
	I0621 18:27:12.589095   30068 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.589221   30068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:27:12.589272   30068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:27:12.589282   30068 certs.go:256] generating profile certs ...
	I0621 18:27:12.589333   30068 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:27:12.589346   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt with IP's: []
	I0621 18:27:12.759863   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt ...
	I0621 18:27:12.759890   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt: {Name:mk1350197087e6f37ca28e80a43c199beace4f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.760090   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key ...
	I0621 18:27:12.760104   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key: {Name:mk90994b992a268304b337419707e3332d3f039a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.760206   30068 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92
	I0621 18:27:12.760222   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.254]
	I0621 18:27:13.132336   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 ...
	I0621 18:27:13.132362   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92: {Name:mke7daa70ff2d7bf8fa87eea51b1ed6731c0dd6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.132530   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92 ...
	I0621 18:27:13.132546   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92: {Name:mk310235904dba1c4db66ef73b8dcc06ff030051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.132647   30068 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:27:13.132737   30068 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
	I0621 18:27:13.132790   30068 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:27:13.132806   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt with IP's: []
	I0621 18:27:13.317891   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt ...
	I0621 18:27:13.317927   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt: {Name:mk5e450ef3633fa54e81eaeb94f9408c94729912 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.318119   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key ...
	I0621 18:27:13.318132   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key: {Name:mk3a1443924b05c36251566d5313d0eeb467e0fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
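The profile certificates generated above include an API server serving cert whose SANs cover the service VIP (10.96.0.1), localhost, the node IP and the HA VIP. Inspecting the SANs of that cert on the CI host is a one-liner (path as logged):

	# Print the Subject Alternative Names of the generated apiserver certificate.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt \
	  | grep -A1 "Subject Alternative Name"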
	I0621 18:27:13.318220   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:27:13.318241   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:27:13.318251   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:27:13.318264   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:27:13.318274   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:27:13.318290   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:27:13.318302   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:27:13.318314   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:27:13.318363   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:27:13.318396   30068 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:27:13.318406   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:27:13.318428   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:27:13.318449   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:27:13.318469   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:27:13.318506   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:13.318531   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.318544   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.318556   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.319121   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:27:13.345382   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:27:13.379289   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:27:13.406853   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:27:13.430624   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0621 18:27:13.452498   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0621 18:27:13.474381   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:27:13.497475   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:27:13.520548   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:27:13.543849   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:27:13.569722   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:27:13.594191   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0621 18:27:13.611312   30068 ssh_runner.go:195] Run: openssl version
	I0621 18:27:13.616881   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:27:13.627054   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.631162   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.631214   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.636845   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:27:13.648132   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:27:13.658846   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.663074   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.663140   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.668358   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 18:27:13.678369   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:27:13.688293   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.692517   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.692581   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.697837   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
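The openssl x509 -hash / ln -fs pairs above implement OpenSSL's CA lookup convention: each trusted certificate is reachable in /etc/ssl/certs under a file named <subject-hash>.0. The same step for one certificate, written out as a sketch:

	# Link a CA cert into /etc/ssl/certs under its subject-hash name so OpenSSL can find it.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941, as in the log
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"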
	I0621 18:27:13.707967   30068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:27:13.711761   30068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 18:27:13.711821   30068 kubeadm.go:391] StartCluster: {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:27:13.711887   30068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0621 18:27:13.711960   30068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0621 18:27:13.752929   30068 cri.go:89] found id: ""
	I0621 18:27:13.753017   30068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0621 18:27:13.762514   30068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0621 18:27:13.771612   30068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0621 18:27:13.781740   30068 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0621 18:27:13.781758   30068 kubeadm.go:156] found existing configuration files:
	
	I0621 18:27:13.781811   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0621 18:27:13.790876   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0621 18:27:13.790943   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0621 18:27:13.800011   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0621 18:27:13.809117   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0621 18:27:13.809168   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0621 18:27:13.818279   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0621 18:27:13.827522   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0621 18:27:13.827584   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0621 18:27:13.836671   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0621 18:27:13.845242   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0621 18:27:13.845298   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
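The grep/rm pairs above are the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not already reference https://control-plane.minikube.internal:8443 is removed so kubeadm regenerates it (here none existed, since this is a first start). The same loop written out as a sketch:

	# Remove kubeconfigs that do not point at the expected control-plane endpoint.
	ENDPOINT=https://control-plane.minikube.internal:8443
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done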
	I0621 18:27:13.854365   30068 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0621 18:27:13.951888   30068 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0621 18:27:13.951970   30068 kubeadm.go:309] [preflight] Running pre-flight checks
	I0621 18:27:14.081675   30068 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0621 18:27:14.081845   30068 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0621 18:27:14.081983   30068 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0621 18:27:14.292951   30068 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0621 18:27:14.423174   30068 out.go:204]   - Generating certificates and keys ...
	I0621 18:27:14.423287   30068 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0621 18:27:14.423355   30068 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0621 18:27:14.524306   30068 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0621 18:27:14.693249   30068 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0621 18:27:14.771462   30068 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0621 18:27:14.965492   30068 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0621 18:27:15.095342   30068 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0621 18:27:15.095646   30068 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-406291 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I0621 18:27:15.247328   30068 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0621 18:27:15.247729   30068 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-406291 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I0621 18:27:15.326656   30068 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0621 18:27:15.470979   30068 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0621 18:27:15.620090   30068 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0621 18:27:15.620402   30068 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0621 18:27:15.715693   30068 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0621 18:27:16.259484   30068 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0621 18:27:16.704626   30068 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0621 18:27:16.836633   30068 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0621 18:27:16.996818   30068 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0621 18:27:16.997517   30068 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0621 18:27:16.999949   30068 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0621 18:27:17.001874   30068 out.go:204]   - Booting up control plane ...
	I0621 18:27:17.001982   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0621 18:27:17.002874   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0621 18:27:17.003729   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0621 18:27:17.018894   30068 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0621 18:27:17.019816   30068 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0621 18:27:17.019944   30068 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0621 18:27:17.138099   30068 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0621 18:27:17.138195   30068 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0621 18:27:17.639115   30068 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.282189ms
	I0621 18:27:17.639214   30068 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0621 18:27:23.502026   30068 kubeadm.go:309] [api-check] The API server is healthy after 5.864418149s
	I0621 18:27:23.512938   30068 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0621 18:27:23.528670   30068 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0621 18:27:24.059886   30068 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0621 18:27:24.060060   30068 kubeadm.go:309] [mark-control-plane] Marking the node ha-406291 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0621 18:27:24.071607   30068 kubeadm.go:309] [bootstrap-token] Using token: ha2utu.p9k0bq1xsr5791t7
	I0621 18:27:24.073185   30068 out.go:204]   - Configuring RBAC rules ...
	I0621 18:27:24.073336   30068 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0621 18:27:24.084336   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0621 18:27:24.092265   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0621 18:27:24.096415   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0621 18:27:24.101175   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0621 18:27:24.104689   30068 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0621 18:27:24.121568   30068 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0621 18:27:24.349610   30068 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0621 18:27:24.907607   30068 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0621 18:27:24.908452   30068 kubeadm.go:309] 
	I0621 18:27:24.908529   30068 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0621 18:27:24.908541   30068 kubeadm.go:309] 
	I0621 18:27:24.908607   30068 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0621 18:27:24.908645   30068 kubeadm.go:309] 
	I0621 18:27:24.908698   30068 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0621 18:27:24.908780   30068 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0621 18:27:24.908863   30068 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0621 18:27:24.908873   30068 kubeadm.go:309] 
	I0621 18:27:24.908975   30068 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0621 18:27:24.908993   30068 kubeadm.go:309] 
	I0621 18:27:24.909038   30068 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0621 18:27:24.909045   30068 kubeadm.go:309] 
	I0621 18:27:24.909086   30068 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0621 18:27:24.909160   30068 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0621 18:27:24.909256   30068 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0621 18:27:24.909274   30068 kubeadm.go:309] 
	I0621 18:27:24.909401   30068 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0621 18:27:24.909522   30068 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0621 18:27:24.909544   30068 kubeadm.go:309] 
	I0621 18:27:24.909671   30068 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ha2utu.p9k0bq1xsr5791t7 \
	I0621 18:27:24.909771   30068 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df \
	I0621 18:27:24.909810   30068 kubeadm.go:309] 	--control-plane 
	I0621 18:27:24.909824   30068 kubeadm.go:309] 
	I0621 18:27:24.909898   30068 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0621 18:27:24.909904   30068 kubeadm.go:309] 
	I0621 18:27:24.909977   30068 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ha2utu.p9k0bq1xsr5791t7 \
	I0621 18:27:24.910064   30068 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df 
	I0621 18:27:24.910664   30068 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
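The join commands printed above pin the cluster CA by the SHA-256 hash of its public key. If the hash is ever needed again it can be recomputed from the CA certificate, as kubeadm's documentation describes; for this cluster the CA lives under the certificatesDir configured above (/var/lib/minikube/certs) rather than kubeadm's default /etc/kubernetes/pki:

	# Recompute the --discovery-token-ca-cert-hash value from the cluster CA certificate.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'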
	I0621 18:27:24.910700   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:27:24.910708   30068 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0621 18:27:24.912398   30068 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0621 18:27:24.913676   30068 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0621 18:27:24.919660   30068 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0621 18:27:24.919677   30068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0621 18:27:24.938734   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0621 18:27:25.303975   30068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0621 18:27:25.304070   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:25.304073   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-406291 minikube.k8s.io/updated_at=2024_06_21T18_27_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600 minikube.k8s.io/name=ha-406291 minikube.k8s.io/primary=true
	I0621 18:27:25.334777   30068 ops.go:34] apiserver oom_adj: -16
	I0621 18:27:25.436873   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:25.937461   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:26.436991   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:26.937206   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:27.437152   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:27.937860   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:28.437177   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:28.937036   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:29.437007   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:29.937140   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:30.437060   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:30.937199   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:31.437695   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:31.937675   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:32.437034   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:32.937808   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:33.437793   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:33.937401   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:34.437307   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:34.937172   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:35.437428   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:35.937146   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:36.436951   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:36.937873   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:37.039583   30068 kubeadm.go:1107] duration metric: took 11.735587948s to wait for elevateKubeSystemPrivileges
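The repeated "kubectl get sa default" runs above are a readiness poll: the default ServiceAccount only appears once the controller-manager's service-account controller is up, so the command is retried roughly every 500ms until it succeeds (about 11.7s here). An equivalent wait loop, sketched with an assumed ~60s timeout:

	# Poll until the 'default' ServiceAccount exists, or give up after ~60 seconds.
	for i in $(seq 1 120); do
	  sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1 && break
	  sleep 0.5
	done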
	W0621 18:27:37.039626   30068 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0621 18:27:37.039635   30068 kubeadm.go:393] duration metric: took 23.327819322s to StartCluster
	I0621 18:27:37.039654   30068 settings.go:142] acquiring lock: {Name:mkdbb660cad4d8fb446e5c2ca4439ea3326e9592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:37.039737   30068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:27:37.040362   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/kubeconfig: {Name:mk87038194ab41f67dd50d90b017d32a83c3da4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:37.040584   30068 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:27:37.040604   30068 start.go:240] waiting for startup goroutines ...
	I0621 18:27:37.040603   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0621 18:27:37.040612   30068 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0621 18:27:37.040669   30068 addons.go:69] Setting storage-provisioner=true in profile "ha-406291"
	I0621 18:27:37.040677   30068 addons.go:69] Setting default-storageclass=true in profile "ha-406291"
	I0621 18:27:37.040699   30068 addons.go:234] Setting addon storage-provisioner=true in "ha-406291"
	I0621 18:27:37.040730   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:27:37.040700   30068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-406291"
	I0621 18:27:37.040772   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:37.041052   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.041075   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.041146   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.041174   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.055583   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42699
	I0621 18:27:37.056062   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.056549   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.056570   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.056894   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.057371   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.057399   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.061343   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44857
	I0621 18:27:37.061846   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.062393   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.062418   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.062721   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.062885   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.065021   30068 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:27:37.065351   30068 kapi.go:59] client config for ha-406291: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt", KeyFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key", CAFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf98a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0621 18:27:37.065825   30068 cert_rotation.go:137] Starting client certificate rotation controller
	I0621 18:27:37.066065   30068 addons.go:234] Setting addon default-storageclass=true in "ha-406291"
	I0621 18:27:37.066106   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:27:37.066471   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.066512   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.072759   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39433
	I0621 18:27:37.073274   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.073791   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.073819   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.074169   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.074346   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.076096   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:37.078312   30068 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0621 18:27:37.079815   30068 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 18:27:37.079840   30068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0621 18:27:37.079864   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:37.081896   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0621 18:27:37.082293   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.082859   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.082878   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.083163   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.083202   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.083607   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.083648   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.083733   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:37.083752   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.083817   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:37.083990   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:37.084135   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:37.084288   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:37.103512   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42081
	I0621 18:27:37.103937   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.104456   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.104473   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.104853   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.105052   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.106976   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:37.107211   30068 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0621 18:27:37.107231   30068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0621 18:27:37.107252   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:37.110295   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.110729   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:37.110755   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.110870   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:37.111030   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:37.111197   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:37.111314   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:37.137868   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0621 18:27:37.228739   30068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 18:27:37.290397   30068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0621 18:27:37.684619   30068 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
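	The kubectl pipeline logged at 18:27:37.137868 is what produces the confirmation above: it streams the coredns ConfigMap through sed, inserting a hosts stanza immediately before the "forward . /etc/resolv.conf" line and a log directive after "errors", then replaces the ConfigMap with the edited copy. Reconstructed from that sed expression, the injected stanza is:

		hosts {
		   192.168.39.1 host.minikube.internal
		   fallthrough
		}

	so pods resolving host.minikube.internal are handed the host's address on the libvirt network, 192.168.39.1.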
	I0621 18:27:37.902862   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.902882   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.902957   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.902988   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903179   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903194   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903203   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.903210   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903287   30068 main.go:141] libmachine: (ha-406291) DBG | Closing plugin on server side
	I0621 18:27:37.903300   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903312   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903321   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.903328   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903474   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903485   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903513   30068 main.go:141] libmachine: (ha-406291) DBG | Closing plugin on server side
	I0621 18:27:37.903578   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903595   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903740   30068 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0621 18:27:37.903767   30068 round_trippers.go:469] Request Headers:
	I0621 18:27:37.903778   30068 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:27:37.903784   30068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:27:37.922164   30068 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0621 18:27:37.922691   30068 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0621 18:27:37.922706   30068 round_trippers.go:469] Request Headers:
	I0621 18:27:37.922713   30068 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:27:37.922718   30068 round_trippers.go:473]     Content-Type: application/json
	I0621 18:27:37.922720   30068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:27:37.926249   30068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0621 18:27:37.926491   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.926512   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.926731   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.926748   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.928515   30068 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0621 18:27:37.930095   30068 addons.go:510] duration metric: took 889.47949ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0621 18:27:37.930127   30068 start.go:245] waiting for cluster config update ...
	I0621 18:27:37.930137   30068 start.go:254] writing updated cluster config ...
	I0621 18:27:37.931687   30068 out.go:177] 
	I0621 18:27:37.933039   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:37.933102   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:37.934716   30068 out.go:177] * Starting "ha-406291-m02" control-plane node in "ha-406291" cluster
	I0621 18:27:37.935953   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:27:37.935970   30068 cache.go:56] Caching tarball of preloaded images
	I0621 18:27:37.936052   30068 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:27:37.936063   30068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:27:37.936142   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:37.936325   30068 start.go:360] acquireMachinesLock for ha-406291-m02: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:27:37.936370   30068 start.go:364] duration metric: took 24.972µs to acquireMachinesLock for "ha-406291-m02"
	I0621 18:27:37.936392   30068 start.go:93] Provisioning new machine with config: &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:27:37.936481   30068 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0621 18:27:37.938349   30068 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0621 18:27:37.938428   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.938450   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.952767   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34515
	I0621 18:27:37.953176   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.953649   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.953669   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.953963   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.954162   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:37.954301   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:37.954431   30068 start.go:159] libmachine.API.Create for "ha-406291" (driver="kvm2")
	I0621 18:27:37.954456   30068 client.go:168] LocalClient.Create starting
	I0621 18:27:37.954488   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem
	I0621 18:27:37.954518   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:27:37.954538   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:27:37.954589   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem
	I0621 18:27:37.954607   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:27:37.954621   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:27:37.954636   30068 main.go:141] libmachine: Running pre-create checks...
	I0621 18:27:37.954644   30068 main.go:141] libmachine: (ha-406291-m02) Calling .PreCreateCheck
	I0621 18:27:37.954836   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:37.955238   30068 main.go:141] libmachine: Creating machine...
	I0621 18:27:37.955253   30068 main.go:141] libmachine: (ha-406291-m02) Calling .Create
	I0621 18:27:37.955404   30068 main.go:141] libmachine: (ha-406291-m02) Creating KVM machine...
	I0621 18:27:37.956748   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found existing default KVM network
	I0621 18:27:37.956951   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found existing private KVM network mk-ha-406291
	I0621 18:27:37.957069   30068 main.go:141] libmachine: (ha-406291-m02) Setting up store path in /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 ...
	I0621 18:27:37.957091   30068 main.go:141] libmachine: (ha-406291-m02) Building disk image from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 18:27:37.957139   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:37.957062   30460 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:27:37.957278   30068 main.go:141] libmachine: (ha-406291-m02) Downloading /home/jenkins/minikube-integration/19112-8111/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0621 18:27:38.178433   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.178291   30460 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa...
	I0621 18:27:38.322659   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.322470   30460 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/ha-406291-m02.rawdisk...
	I0621 18:27:38.322709   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 (perms=drwx------)
	I0621 18:27:38.322719   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Writing magic tar header
	I0621 18:27:38.322734   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Writing SSH key tar header
	I0621 18:27:38.322745   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.322583   30460 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 ...
	I0621 18:27:38.322758   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02
	I0621 18:27:38.322822   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines
	I0621 18:27:38.322839   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines (perms=drwxr-xr-x)
	I0621 18:27:38.322855   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube (perms=drwxr-xr-x)
	I0621 18:27:38.322864   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111 (perms=drwxrwxr-x)
	I0621 18:27:38.322874   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0621 18:27:38.322882   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0621 18:27:38.322896   30068 main.go:141] libmachine: (ha-406291-m02) Creating domain...
	I0621 18:27:38.322919   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:27:38.322939   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111
	I0621 18:27:38.322950   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0621 18:27:38.322968   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins
	I0621 18:27:38.322980   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home
	I0621 18:27:38.322988   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Skipping /home - not owner
	I0621 18:27:38.324031   30068 main.go:141] libmachine: (ha-406291-m02) define libvirt domain using xml: 
	I0621 18:27:38.324058   30068 main.go:141] libmachine: (ha-406291-m02) <domain type='kvm'>
	I0621 18:27:38.324071   30068 main.go:141] libmachine: (ha-406291-m02)   <name>ha-406291-m02</name>
	I0621 18:27:38.324078   30068 main.go:141] libmachine: (ha-406291-m02)   <memory unit='MiB'>2200</memory>
	I0621 18:27:38.324087   30068 main.go:141] libmachine: (ha-406291-m02)   <vcpu>2</vcpu>
	I0621 18:27:38.324098   30068 main.go:141] libmachine: (ha-406291-m02)   <features>
	I0621 18:27:38.324107   30068 main.go:141] libmachine: (ha-406291-m02)     <acpi/>
	I0621 18:27:38.324116   30068 main.go:141] libmachine: (ha-406291-m02)     <apic/>
	I0621 18:27:38.324125   30068 main.go:141] libmachine: (ha-406291-m02)     <pae/>
	I0621 18:27:38.324134   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324149   30068 main.go:141] libmachine: (ha-406291-m02)   </features>
	I0621 18:27:38.324164   30068 main.go:141] libmachine: (ha-406291-m02)   <cpu mode='host-passthrough'>
	I0621 18:27:38.324173   30068 main.go:141] libmachine: (ha-406291-m02)   
	I0621 18:27:38.324184   30068 main.go:141] libmachine: (ha-406291-m02)   </cpu>
	I0621 18:27:38.324199   30068 main.go:141] libmachine: (ha-406291-m02)   <os>
	I0621 18:27:38.324209   30068 main.go:141] libmachine: (ha-406291-m02)     <type>hvm</type>
	I0621 18:27:38.324220   30068 main.go:141] libmachine: (ha-406291-m02)     <boot dev='cdrom'/>
	I0621 18:27:38.324231   30068 main.go:141] libmachine: (ha-406291-m02)     <boot dev='hd'/>
	I0621 18:27:38.324258   30068 main.go:141] libmachine: (ha-406291-m02)     <bootmenu enable='no'/>
	I0621 18:27:38.324280   30068 main.go:141] libmachine: (ha-406291-m02)   </os>
	I0621 18:27:38.324293   30068 main.go:141] libmachine: (ha-406291-m02)   <devices>
	I0621 18:27:38.324310   30068 main.go:141] libmachine: (ha-406291-m02)     <disk type='file' device='cdrom'>
	I0621 18:27:38.324333   30068 main.go:141] libmachine: (ha-406291-m02)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/boot2docker.iso'/>
	I0621 18:27:38.324344   30068 main.go:141] libmachine: (ha-406291-m02)       <target dev='hdc' bus='scsi'/>
	I0621 18:27:38.324350   30068 main.go:141] libmachine: (ha-406291-m02)       <readonly/>
	I0621 18:27:38.324357   30068 main.go:141] libmachine: (ha-406291-m02)     </disk>
	I0621 18:27:38.324363   30068 main.go:141] libmachine: (ha-406291-m02)     <disk type='file' device='disk'>
	I0621 18:27:38.324375   30068 main.go:141] libmachine: (ha-406291-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0621 18:27:38.324390   30068 main.go:141] libmachine: (ha-406291-m02)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/ha-406291-m02.rawdisk'/>
	I0621 18:27:38.324401   30068 main.go:141] libmachine: (ha-406291-m02)       <target dev='hda' bus='virtio'/>
	I0621 18:27:38.324412   30068 main.go:141] libmachine: (ha-406291-m02)     </disk>
	I0621 18:27:38.324421   30068 main.go:141] libmachine: (ha-406291-m02)     <interface type='network'>
	I0621 18:27:38.324431   30068 main.go:141] libmachine: (ha-406291-m02)       <source network='mk-ha-406291'/>
	I0621 18:27:38.324442   30068 main.go:141] libmachine: (ha-406291-m02)       <model type='virtio'/>
	I0621 18:27:38.324453   30068 main.go:141] libmachine: (ha-406291-m02)     </interface>
	I0621 18:27:38.324465   30068 main.go:141] libmachine: (ha-406291-m02)     <interface type='network'>
	I0621 18:27:38.324474   30068 main.go:141] libmachine: (ha-406291-m02)       <source network='default'/>
	I0621 18:27:38.324481   30068 main.go:141] libmachine: (ha-406291-m02)       <model type='virtio'/>
	I0621 18:27:38.324493   30068 main.go:141] libmachine: (ha-406291-m02)     </interface>
	I0621 18:27:38.324503   30068 main.go:141] libmachine: (ha-406291-m02)     <serial type='pty'>
	I0621 18:27:38.324516   30068 main.go:141] libmachine: (ha-406291-m02)       <target port='0'/>
	I0621 18:27:38.324527   30068 main.go:141] libmachine: (ha-406291-m02)     </serial>
	I0621 18:27:38.324540   30068 main.go:141] libmachine: (ha-406291-m02)     <console type='pty'>
	I0621 18:27:38.324553   30068 main.go:141] libmachine: (ha-406291-m02)       <target type='serial' port='0'/>
	I0621 18:27:38.324562   30068 main.go:141] libmachine: (ha-406291-m02)     </console>
	I0621 18:27:38.324572   30068 main.go:141] libmachine: (ha-406291-m02)     <rng model='virtio'>
	I0621 18:27:38.324596   30068 main.go:141] libmachine: (ha-406291-m02)       <backend model='random'>/dev/random</backend>
	I0621 18:27:38.324609   30068 main.go:141] libmachine: (ha-406291-m02)     </rng>
	I0621 18:27:38.324630   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324640   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324648   30068 main.go:141] libmachine: (ha-406291-m02)   </devices>
	I0621 18:27:38.324660   30068 main.go:141] libmachine: (ha-406291-m02) </domain>
	I0621 18:27:38.324670   30068 main.go:141] libmachine: (ha-406291-m02) 
	I0621 18:27:38.332042   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:20:08:0e in network default
	I0621 18:27:38.332641   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring networks are active...
	I0621 18:27:38.332676   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:38.333428   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring network default is active
	I0621 18:27:38.333804   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring network mk-ha-406291 is active
	I0621 18:27:38.334296   30068 main.go:141] libmachine: (ha-406291-m02) Getting domain xml...
	I0621 18:27:38.335120   30068 main.go:141] libmachine: (ha-406291-m02) Creating domain...
	I0621 18:27:39.549305   30068 main.go:141] libmachine: (ha-406291-m02) Waiting to get IP...
	I0621 18:27:39.550967   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:39.551951   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:39.551976   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:39.551936   30460 retry.go:31] will retry after 267.635955ms: waiting for machine to come up
	I0621 18:27:39.821522   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:39.821997   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:39.822029   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:39.821946   30460 retry.go:31] will retry after 374.873977ms: waiting for machine to come up
	I0621 18:27:40.198386   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:40.198873   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:40.198904   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:40.198809   30460 retry.go:31] will retry after 315.815993ms: waiting for machine to come up
	I0621 18:27:40.516366   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:40.516862   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:40.516886   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:40.516817   30460 retry.go:31] will retry after 541.866776ms: waiting for machine to come up
	I0621 18:27:41.060525   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:41.061206   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:41.061240   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:41.061128   30460 retry.go:31] will retry after 493.062164ms: waiting for machine to come up
	I0621 18:27:41.555747   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:41.556109   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:41.556139   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:41.556061   30460 retry.go:31] will retry after 805.68132ms: waiting for machine to come up
	I0621 18:27:42.362929   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:42.363432   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:42.363464   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:42.363390   30460 retry.go:31] will retry after 986.445399ms: waiting for machine to come up
	I0621 18:27:43.351818   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:43.352265   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:43.352293   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:43.352201   30460 retry.go:31] will retry after 1.001415085s: waiting for machine to come up
	I0621 18:27:44.355253   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:44.355689   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:44.355710   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:44.355671   30460 retry.go:31] will retry after 1.270979624s: waiting for machine to come up
	I0621 18:27:45.627787   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:45.628323   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:45.628354   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:45.628272   30460 retry.go:31] will retry after 2.328221347s: waiting for machine to come up
	I0621 18:27:47.958352   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:47.958918   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:47.958945   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:47.958858   30460 retry.go:31] will retry after 2.603205559s: waiting for machine to come up
	I0621 18:27:50.565502   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:50.565956   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:50.565982   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:50.565839   30460 retry.go:31] will retry after 3.267607258s: waiting for machine to come up
	I0621 18:27:53.834801   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:53.835311   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:53.835344   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:53.835270   30460 retry.go:31] will retry after 4.450176964s: waiting for machine to come up
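	The run of "will retry after ..." messages above is minikube polling for the new VM's DHCP lease with a growing, jittered delay (from roughly 267ms up to several seconds) until an address shows up. As a rough illustration of that pattern only, and not minikube's actual retry.go implementation, a minimal Go sketch might look like this (waitForIP and lookupIP are hypothetical names):

		// Hypothetical sketch of a jittered, capped retry loop; this is not
		// minikube's retry.go, just an illustration of the pattern in the log.
		package main

		import (
			"errors"
			"fmt"
			"math/rand"
			"time"
		)

		var errNoIP = errors.New("no IP yet")

		// lookupIP stands in for querying the libvirt network's DHCP leases.
		func lookupIP() (string, error) { return "", errNoIP }

		func waitForIP(timeout time.Duration) (string, error) {
			deadline := time.Now().Add(timeout)
			wait := 250 * time.Millisecond
			for time.Now().Before(deadline) {
				if ip, err := lookupIP(); err == nil {
					return ip, nil
				}
				// Grow and jitter the delay, roughly like the 267ms..4.45s steps above.
				sleep := wait + time.Duration(rand.Int63n(int64(wait)))
				fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
				time.Sleep(sleep)
				if wait < 3*time.Second {
					wait *= 2
				}
			}
			return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
		}

		func main() {
			if ip, err := waitForIP(5 * time.Second); err != nil {
				fmt.Println("error:", err)
			} else {
				fmt.Println("found IP:", ip)
			}
		}

	Randomizing each delay keeps concurrent machine creations from polling libvirt in lockstep, which is consistent with the uneven intervals seen in the log.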
	I0621 18:27:58.286744   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.287205   30068 main.go:141] libmachine: (ha-406291-m02) Found IP for machine: 192.168.39.89
	I0621 18:27:58.287228   30068 main.go:141] libmachine: (ha-406291-m02) Reserving static IP address...
	I0621 18:27:58.287241   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has current primary IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.287601   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find host DHCP lease matching {name: "ha-406291-m02", mac: "52:54:00:a6:9a:09", ip: "192.168.39.89"} in network mk-ha-406291
	I0621 18:27:58.359643   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Getting to WaitForSSH function...
	I0621 18:27:58.359672   30068 main.go:141] libmachine: (ha-406291-m02) Reserved static IP address: 192.168.39.89
	I0621 18:27:58.359686   30068 main.go:141] libmachine: (ha-406291-m02) Waiting for SSH to be available...
	I0621 18:27:58.362234   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.362656   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.362687   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.362831   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH client type: external
	I0621 18:27:58.362856   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa (-rw-------)
	I0621 18:27:58.362889   30068 main.go:141] libmachine: (ha-406291-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:58.362901   30068 main.go:141] libmachine: (ha-406291-m02) DBG | About to run SSH command:
	I0621 18:27:58.362914   30068 main.go:141] libmachine: (ha-406291-m02) DBG | exit 0
	I0621 18:27:58.489760   30068 main.go:141] libmachine: (ha-406291-m02) DBG | SSH cmd err, output: <nil>: 
	I0621 18:27:58.490247   30068 main.go:141] libmachine: (ha-406291-m02) KVM machine creation complete!
	I0621 18:27:58.490512   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:58.491093   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:58.491338   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:58.491506   30068 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0621 18:27:58.491523   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:27:58.492807   30068 main.go:141] libmachine: Detecting operating system of created instance...
	I0621 18:27:58.492820   30068 main.go:141] libmachine: Waiting for SSH to be available...
	I0621 18:27:58.492825   30068 main.go:141] libmachine: Getting to WaitForSSH function...
	I0621 18:27:58.492853   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.495422   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.495802   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.495822   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.496013   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.496199   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.496377   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.496515   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.496690   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.496943   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.496957   30068 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0621 18:27:58.609072   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:58.609094   30068 main.go:141] libmachine: Detecting the provisioner...
	I0621 18:27:58.609101   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.611976   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.612412   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.612450   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.612655   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.612869   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.613083   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.613234   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.613421   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.613617   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.613629   30068 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0621 18:27:58.726636   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0621 18:27:58.726736   30068 main.go:141] libmachine: found compatible host: buildroot
	I0621 18:27:58.726751   30068 main.go:141] libmachine: Provisioning with buildroot...
	I0621 18:27:58.726768   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.727017   30068 buildroot.go:166] provisioning hostname "ha-406291-m02"
	I0621 18:27:58.727040   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.727234   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.729851   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.730255   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.730296   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.730453   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.730628   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.730787   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.730932   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.731090   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.731271   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.731295   30068 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291-m02 && echo "ha-406291-m02" | sudo tee /etc/hostname
	I0621 18:27:58.855682   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291-m02
	
	I0621 18:27:58.855710   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.858373   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.858679   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.858702   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.858921   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.859107   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.859289   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.859473   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.859613   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.859768   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.859784   30068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:27:58.979692   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:58.979722   30068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:27:58.979735   30068 buildroot.go:174] setting up certificates
	I0621 18:27:58.979743   30068 provision.go:84] configureAuth start
	I0621 18:27:58.979750   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.980076   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:58.982730   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.983078   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.983110   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.983252   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.985344   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.985701   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.985721   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.985890   30068 provision.go:143] copyHostCerts
	I0621 18:27:58.985924   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:58.985962   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:27:58.985976   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:58.986057   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:27:58.986156   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:58.986180   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:27:58.986187   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:58.986229   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:27:58.986293   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:58.986317   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:27:58.986326   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:58.986360   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:27:58.986426   30068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291-m02 san=[127.0.0.1 192.168.39.89 ha-406291-m02 localhost minikube]
	I0621 18:27:59.066564   30068 provision.go:177] copyRemoteCerts
	I0621 18:27:59.066626   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:27:59.066653   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.069578   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.069924   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.069947   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.070132   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.070298   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.070432   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.070553   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.157218   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:27:59.157315   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 18:27:59.181198   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:27:59.181277   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:27:59.204590   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:27:59.204671   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0621 18:27:59.228836   30068 provision.go:87] duration metric: took 249.081961ms to configureAuth
	I0621 18:27:59.228857   30068 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:27:59.229023   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:59.229086   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.231759   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.232083   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.232114   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.232338   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.232525   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.232684   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.232859   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.233030   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:59.233222   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:59.233247   30068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:27:59.513149   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
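	The %!s(MISSING) in the command above is a Go fmt artifact in the log, not part of what ran on the VM: the command template contains a literal %s, and when the line is rendered through a Printf-style call without a matching argument, fmt prints %!s(MISSING) in its place. The remote shell most likely executed printf %s "..." | sudo tee /etc/sysconfig/crio.minikube, and the same reading applies to the later date +%!s(MISSING).%!N(MISSING) command, which corresponds to date +%s.%N.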
	
	I0621 18:27:59.513176   30068 main.go:141] libmachine: Checking connection to Docker...
	I0621 18:27:59.513184   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetURL
	I0621 18:27:59.514352   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using libvirt version 6000000
	I0621 18:27:59.516825   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.517208   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.517232   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.517421   30068 main.go:141] libmachine: Docker is up and running!
	I0621 18:27:59.517438   30068 main.go:141] libmachine: Reticulating splines...
	I0621 18:27:59.517446   30068 client.go:171] duration metric: took 21.562982419s to LocalClient.Create
	I0621 18:27:59.517469   30068 start.go:167] duration metric: took 21.563040702s to libmachine.API.Create "ha-406291"
	I0621 18:27:59.517482   30068 start.go:293] postStartSetup for "ha-406291-m02" (driver="kvm2")
	I0621 18:27:59.517494   30068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:27:59.517516   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.517768   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:27:59.517792   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.520113   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.520510   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.520540   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.520681   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.520881   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.521084   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.521256   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.607755   30068 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:27:59.611555   30068 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:27:59.611581   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:27:59.611696   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:27:59.611804   30068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:27:59.611817   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:27:59.611939   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:27:59.620359   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:59.643420   30068 start.go:296] duration metric: took 125.923821ms for postStartSetup
	I0621 18:27:59.643465   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:59.644062   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:59.646345   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.646685   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.646713   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.646924   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:59.647158   30068 start.go:128] duration metric: took 21.710666055s to createHost
	I0621 18:27:59.647181   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.649469   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.649766   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.649808   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.649962   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.650164   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.650334   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.650463   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.650585   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:59.650778   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:59.650790   30068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0621 18:27:59.762223   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718994479.737744516
	
	I0621 18:27:59.762248   30068 fix.go:216] guest clock: 1718994479.737744516
	I0621 18:27:59.762259   30068 fix.go:229] Guest: 2024-06-21 18:27:59.737744516 +0000 UTC Remote: 2024-06-21 18:27:59.647170431 +0000 UTC m=+77.232139235 (delta=90.574085ms)
	I0621 18:27:59.762274   30068 fix.go:200] guest clock delta is within tolerance: 90.574085ms
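	The reported delta is simply the difference of the two timestamps on the preceding fix.go line: 18:27:59.737744516 (guest) minus 18:27:59.647170431 (remote) = 0.090574085s, i.e. 90.574085ms, consistent with the within-tolerance check logged above.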
	I0621 18:27:59.762279   30068 start.go:83] releasing machines lock for "ha-406291-m02", held for 21.825898335s
	I0621 18:27:59.762294   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.762550   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:59.765379   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.765744   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.765772   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.768017   30068 out.go:177] * Found network options:
	I0621 18:27:59.769201   30068 out.go:177]   - NO_PROXY=192.168.39.198
	W0621 18:27:59.770311   30068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0621 18:27:59.770350   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.770853   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.771049   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.771143   30068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:27:59.771180   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	W0621 18:27:59.771247   30068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0621 18:27:59.771305   30068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:27:59.771322   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.774073   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774210   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774455   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.774482   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774586   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.774595   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.774615   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774740   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.774796   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.774875   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.774963   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.775030   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.775150   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.775184   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:28:00.009851   30068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:28:00.016373   30068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:28:00.016450   30068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:28:00.032199   30068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0621 18:28:00.032221   30068 start.go:494] detecting cgroup driver to use...
	I0621 18:28:00.032283   30068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:28:00.047343   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:28:00.061720   30068 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:28:00.061774   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:28:00.074668   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:28:00.087919   30068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:28:00.213060   30068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:28:00.376339   30068 docker.go:233] disabling docker service ...
	I0621 18:28:00.376406   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:28:00.391732   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:28:00.405305   30068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:28:00.525867   30068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:28:00.642362   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 18:28:00.656276   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:28:00.673811   30068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:28:00.673883   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.683794   30068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:28:00.683849   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.693601   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.703298   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.712924   30068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:28:00.722921   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.733272   30068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.749781   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.759708   30068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:28:00.768749   30068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 18:28:00.768811   30068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 18:28:00.780758   30068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 18:28:00.789993   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:28:00.904855   30068 ssh_runner.go:195] Run: sudo systemctl restart crio
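[Editor's note] The block above is a fixed sequence of shell edits (pause image, cgroup manager, conmon cgroup, then a crio restart) pushed over SSH one command at a time. A rough sketch of driving such a step list from Go, with a hypothetical Runner interface standing in for minikube's ssh_runner; the command strings mirror the ones logged, but the structure is illustrative only:

package main

import "fmt"

// Runner is a hypothetical stand-in for minikube's ssh_runner: it executes a
// shell command on the remote node and returns any error.
type Runner interface {
	Run(cmd string) error
}

// printRunner is a dry-run Runner that just prints each command.
type printRunner struct{}

func (printRunner) Run(cmd string) error {
	fmt.Println(cmd)
	return nil
}

// configureCRIO applies the same style of edits the log shows and restarts crio.
func configureCRIO(r Runner, pauseImage, cgroupDriver string) error {
	cmds := []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, cgroupDriver),
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		if err := r.Run(c); err != nil {
			return fmt.Errorf("running %q: %w", c, err)
		}
	}
	return nil
}

func main() {
	if err := configureCRIO(printRunner{}, "registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
		fmt.Println(err)
	}
}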
	I0621 18:28:01.039631   30068 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:28:01.039706   30068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:28:01.044480   30068 start.go:562] Will wait 60s for crictl version
	I0621 18:28:01.044536   30068 ssh_runner.go:195] Run: which crictl
	I0621 18:28:01.048220   30068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:28:01.089333   30068 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:28:01.089402   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:28:01.115665   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:28:01.144585   30068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:28:01.145952   30068 out.go:177]   - env NO_PROXY=192.168.39.198
	I0621 18:28:01.147149   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:28:01.149745   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:28:01.150121   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:28:01.150153   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:28:01.150424   30068 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:28:01.154395   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:28:01.167802   30068 mustload.go:65] Loading cluster: ha-406291
	I0621 18:28:01.168024   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:28:01.168528   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:28:01.168581   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:28:01.183458   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35465
	I0621 18:28:01.183955   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:28:01.184452   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:28:01.184472   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:28:01.184809   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:28:01.185006   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:28:01.186504   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:28:01.186796   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:28:01.186838   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:28:01.201898   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38995
	I0621 18:28:01.202307   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:28:01.202715   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:28:01.202735   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:28:01.203060   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:28:01.203242   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:28:01.203402   30068 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.89
	I0621 18:28:01.203414   30068 certs.go:194] generating shared ca certs ...
	I0621 18:28:01.203427   30068 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.203536   30068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:28:01.203569   30068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:28:01.203578   30068 certs.go:256] generating profile certs ...
	I0621 18:28:01.203637   30068 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:28:01.203663   30068 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63
	I0621 18:28:01.203682   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.89 192.168.39.254]
	I0621 18:28:01.277240   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 ...
	I0621 18:28:01.277269   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63: {Name:mk0eb1e86875fe5e87f845f9e621f66001c859bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.277433   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63 ...
	I0621 18:28:01.277446   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63: {Name:mk95e28e76a927e44fae3dabafa76ecc474c70ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.277517   30068 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:28:01.277686   30068 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
	I0621 18:28:01.277852   30068 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
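[Editor's note] The apiserver certificate generated above is issued from the shared minikubeCA with a SAN IP list that covers the cluster service IPs, both control-plane node IPs, and the HA VIP 192.168.39.254 advertised by kube-vip. A minimal crypto/x509 sketch of issuing such a certificate, with a throwaway self-signed CA standing in for the reused minikubeCA; helper names and key sizes are assumptions, not minikube's certs.go API:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newAPIServerCert issues a serving cert whose SAN IP list matches the one in
// the log: service IPs, both control-plane node IPs, and the HA VIP.
func newAPIServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.198"), net.ParseIP("192.168.39.89"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}

func main() {
	// Throwaway self-signed CA for illustration only.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}
	der, _, err := newAPIServerCert(caCert, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued apiserver cert, %d DER bytes\n", len(der))
}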
	I0621 18:28:01.277870   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:28:01.277883   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:28:01.277894   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:28:01.277906   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:28:01.277922   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:28:01.277934   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:28:01.277946   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:28:01.277957   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:28:01.278003   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:28:01.278030   30068 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:28:01.278039   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:28:01.278059   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:28:01.278080   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:28:01.278100   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:28:01.278136   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:28:01.278162   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.278179   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.278191   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.278220   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:28:01.281289   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:28:01.281749   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:28:01.281771   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:28:01.281960   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:28:01.282180   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:28:01.282351   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:28:01.282534   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:28:01.350153   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0621 18:28:01.355146   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0621 18:28:01.366317   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0621 18:28:01.370418   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0621 18:28:01.381527   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0621 18:28:01.385371   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0621 18:28:01.395583   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0621 18:28:01.399523   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0621 18:28:01.409427   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0621 18:28:01.413340   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0621 18:28:01.424281   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0621 18:28:01.428574   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0621 18:28:01.443501   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:28:01.467141   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:28:01.489464   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:28:01.512839   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:28:01.536345   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0621 18:28:01.560903   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0621 18:28:01.585228   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:28:01.609236   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:28:01.632797   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:28:01.657717   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:28:01.680728   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:28:01.704813   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0621 18:28:01.722206   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0621 18:28:01.739548   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0621 18:28:01.757066   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0621 18:28:01.773769   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0621 18:28:01.790648   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0621 18:28:01.807019   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0621 18:28:01.824606   30068 ssh_runner.go:195] Run: openssl version
	I0621 18:28:01.830760   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:28:01.841994   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.846701   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.846753   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.852556   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0621 18:28:01.863407   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:28:01.874586   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.879134   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.879185   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.884636   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:28:01.895639   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:28:01.907107   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.911747   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.911813   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.917537   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 18:28:01.928452   30068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:28:01.932569   30068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 18:28:01.932640   30068 kubeadm.go:928] updating node {m02 192.168.39.89 8443 v1.30.2 crio true true} ...
	I0621 18:28:01.932831   30068 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0621 18:28:01.932869   30068 kube-vip.go:115] generating kube-vip config ...
	I0621 18:28:01.932919   30068 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:28:01.949970   30068 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:28:01.950046   30068 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
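[Editor's note] The kube-vip static-pod manifest above is produced by filling a template with the VIP address, interface, and API server port for this cluster. A minimal sketch of that style of generation with text/template; the struct fields and the trimmed-down template here are illustrative, not minikube's kube-vip.go template, which carries the full env list shown above:

package main

import (
	"os"
	"text/template"
)

// vipParams holds the values substituted into the kube-vip static-pod
// manifest; the field names are illustrative.
type vipParams struct {
	VIP       string
	Interface string
	Port      int
}

// A trimmed-down manifest template for illustration.
const vipTemplate = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: address
      value: {{ .VIP }}
    - name: vip_interface
      value: {{ .Interface }}
    - name: port
      value: "{{ .Port }}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(vipTemplate))
	// Values taken from the generated config in the log above.
	_ = t.Execute(os.Stdout, vipParams{VIP: "192.168.39.254", Interface: "eth0", Port: 8443})
}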
	I0621 18:28:01.950102   30068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:28:01.960116   30068 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0621 18:28:01.960197   30068 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0621 18:28:01.969893   30068 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0621 18:28:01.969926   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0621 18:28:01.969997   30068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0621 18:28:01.970033   30068 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0621 18:28:01.970001   30068 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0621 18:28:01.974344   30068 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0621 18:28:01.974375   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0621 18:28:02.755689   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0621 18:28:02.755764   30068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0621 18:28:02.760415   30068 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0621 18:28:02.760448   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0621 18:28:55.051081   30068 out.go:177] 
	W0621 18:28:55.052955   30068 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0] Decompressors:map[bz2:0xc000769610 gz:0xc000769618 tar:0xc0007695c0 tar.bz2:0xc0007695d0 tar.gz:0xc0007695e0 tar.xz:0xc0007695f0 tar.zst:0xc000769600 tbz2:0xc0007695d0 tgz:0xc0007695e0 txz:0xc0007695f0 tzst:0xc000769600 xz:0xc000769620 zip:0xc000769630 zst:0xc000769628] Getters:map[file:0xc0009371c0 http:0xc
0008bcf50 https:0xc0008bcfa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:46716->151.101.193.55:443: read: connection reset by peer
	W0621 18:28:55.052979   30068 out.go:239] * 
	W0621 18:28:55.053829   30068 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0621 18:28:55.055312   30068 out.go:177] 
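[Editor's note] The failure above is a kubelet binary download aborted by a TCP connection reset; the "?checksum=file:...sha256" query tells go-getter to verify the downloaded file against the published SHA-256 sum, so a truncated transfer cannot be silently accepted. A minimal sketch of the same contract using plain net/http instead of go-getter; minikube's real download path (download.go) additionally retries and reports progress, and none of the helper names below are minikube APIs:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchString downloads a small text resource (here, the .sha256 file).
func fetchString(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	b, err := io.ReadAll(resp.Body)
	return string(b), err
}

// downloadVerified fetches url to dst and checks it against the hex SHA-256
// published at sumURL.
func downloadVerified(url, sumURL, dst string) error {
	sum, err := fetchString(sumURL)
	if err != nil {
		return err
	}
	// Published .sha256 files look like "<hex>  <filename>" or just "<hex>".
	fields := strings.Fields(strings.TrimSpace(sum))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file %s", sumURL)
	}
	want := fields[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("GET %s: %s", url, resp.Status)
	}

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err // e.g. "read: connection reset by peer", as in the failure above
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	err := downloadVerified(
		"https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet",
		"https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256",
		"kubelet",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}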
	
	
	==> CRI-O <==
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.822540960Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d4e45a41-ec4f-40c7-8779-0432ec70b729 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.823569338Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8270b6ac-090c-48b1-81cc-fe2d46f0328b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.823987700Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718994535823964068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:136079,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8270b6ac-090c-48b1-81cc-fe2d46f0328b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.824514756Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f5d9d00-0a59-4e6b-b100-04f7445422b5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.824567597Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f5d9d00-0a59-4e6b-b100-04f7445422b5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.824770463Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5
af0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718994
458069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1f5d9d00-0a59-4e6b-b100-04f7445422b5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.829122752Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=bd91b3c2-2aa2-4bc3-be38-2bf665719483 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.829883079Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f6a39ae0-87ac-492a-a711-290e61bb895e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994459650788102,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"
},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-21T18:27:39.331926430Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-7ng4v,Uid:4724701c-6f0e-45ed-8fc7-70245d4fa569,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994459636285025,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,k8s-app: kube-dns,pod
-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-21T18:27:39.324840171Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nx5xs,Uid:375157ef-5af0-41b9-8ed9-162e5a88c679,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994459635123081,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c679,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-21T18:27:39.328881687Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&PodSandboxMetadata{Name:kube-proxy-xnbqj,Uid:11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,Namespace:kube-sys
tem,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994457732197222,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-21T18:27:37.424597593Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&PodSandboxMetadata{Name:kindnet-vnds7,Uid:e921d86f-0ac3-413e-9e85-e809139ca210,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994457715084104,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,k8s-app: kindnet,pod-template
-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-21T18:27:37.400904877Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-406291,Uid:81efe8b097b0aaeaaac87f9a6e2dfe3b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994437888590878,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 81efe8b097b0aaeaaac87f9a6e2dfe3b,kubernetes.io/config.seen: 2024-06-21T18:27:17.383181217Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-406291,Uid:29b
f44d365a415a68be28c9aad205c23,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994437887303918,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{kubernetes.io/config.hash: 29bf44d365a415a68be28c9aad205c23,kubernetes.io/config.seen: 2024-06-21T18:27:17.383182123Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&PodSandboxMetadata{Name:etcd-ha-406291,Uid:28eb1f9a7974972f95837a71475ffe97,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994437864857022,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,tier: control-plane,},Annotations:map[string]s
tring{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.198:2379,kubernetes.io/config.hash: 28eb1f9a7974972f95837a71475ffe97,kubernetes.io/config.seen: 2024-06-21T18:27:17.383174241Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-406291,Uid:ac2d2e5dadb6d48084ee46b3119245c5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994437841913023,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.198:8443,kubernetes.io/config.hash: ac2d2e5dadb6d48084ee46b3119245c5,kubernetes.io/config.seen: 2024-06-21T18:27:17.383178563Z,kubernetes.io/config.s
ource: file,},RuntimeHandler:,},&PodSandbox{Id:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-406291,Uid:8bd582f38b9812a77200f468c3cf9c0d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994437841113621,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8bd582f38b9812a77200f468c3cf9c0d,kubernetes.io/config.seen: 2024-06-21T18:27:17.383179836Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=bd91b3c2-2aa2-4bc3-be38-2bf665719483 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.830505800Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b2f14e2-2ee6-4e76-942a-7823b78bb2f1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.830554084Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b2f14e2-2ee6-4e76-942a-7823b78bb2f1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.830748784Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5
af0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718994
458069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b2f14e2-2ee6-4e76-942a-7823b78bb2f1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.860959537Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1cea5a18-1e58-4b68-8f8b-62a411140d7f name=/runtime.v1.RuntimeService/Version
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.861045861Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1cea5a18-1e58-4b68-8f8b-62a411140d7f name=/runtime.v1.RuntimeService/Version
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.862219381Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e5a5dff-434c-4680-83af-4f6f24c87666 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.862602659Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718994535862579359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:136079,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e5a5dff-434c-4680-83af-4f6f24c87666 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.863228019Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a48ad0f1-2025-4c80-914f-39fcd533cea0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.863282163Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a48ad0f1-2025-4c80-914f-39fcd533cea0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.863481461Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5
af0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718994
458069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a48ad0f1-2025-4c80-914f-39fcd533cea0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.899830542Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c8155c29-9fb1-4814-a565-7424ffa9769c name=/runtime.v1.RuntimeService/Version
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.899921021Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c8155c29-9fb1-4814-a565-7424ffa9769c name=/runtime.v1.RuntimeService/Version
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.900895175Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd8738ac-fb40-4078-a1a1-af70d5ddd84b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.901479971Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718994535901448581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:136079,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd8738ac-fb40-4078-a1a1-af70d5ddd84b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.901982840Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=097c3a5f-f9c0-4a33-95af-edd3dd9ead1d name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.902035060Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=097c3a5f-f9c0-4a33-95af-edd3dd9ead1d name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:28:55 ha-406291 crio[679]: time="2024-06-21 18:28:55.902285318Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5
af0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718994
458069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=097c3a5f-f9c0-4a33-95af-edd3dd9ead1d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6d732e2622f11       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                    About a minute ago   Running             coredns                   0                   59eb38b2794b0       coredns-7db6d8ff4d-7ng4v
	6088ccc5ec4be       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                    About a minute ago   Running             coredns                   0                   a68caa8578d30       coredns-7db6d8ff4d-nx5xs
	9d0ad73531279       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                    About a minute ago   Running             storage-provisioner       0                   ab6a16146209c       storage-provisioner
	468b13f5a8054       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                    About a minute ago   Running             kindnet-cni               0                   956df8749e8db       kindnet-vnds7
	e41f8891c5177       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                    About a minute ago   Running             kube-proxy                0                   ab9fd8c2e0094       kube-proxy-xnbqj
	96a229fabb5aa       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f   About a minute ago   Running             kube-vip                  0                   79ad95611cf22       kube-vip-ha-406291
	a143e6000662a       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                    About a minute ago   Running             kube-scheduler            0                   7cae0fc993f3a       kube-scheduler-ha-406291
	89b399d67fa40       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                    About a minute ago   Running             etcd                      0                   afce4542ea7ca       etcd-ha-406291
	2d71c6ae5cee5       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                    About a minute ago   Running             kube-apiserver            0                   9552de7a0cb73       kube-apiserver-ha-406291
	3fbe446b39e8d       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                    About a minute ago   Running             kube-controller-manager   0                   2b8837f8e36da       kube-controller-manager-ha-406291
	
	
	==> coredns [6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57758 - 16030 "HINFO IN 938012208132191314.8379741084222464033. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014128651s
	
	
	==> coredns [6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45911 - 30730 "HINFO IN 2397840142540691982.2649863782968500509. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014966559s
	
	
	==> describe nodes <==
	Name:               ha-406291
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=ha-406291
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_21T18_27_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 18:27:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406291
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 18:28:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 18:27:39 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 18:27:39 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 18:27:39 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 18:27:39 +0000   Fri, 21 Jun 2024 18:27:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    ha-406291
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 10b5f2f4e64d426eb3a71e7a23c0cea5
	  System UUID:                10b5f2f4-e64d-426e-b3a7-1e7a23c0cea5
	  Boot ID:                    10778ad9-ed13-4749-a084-25b2b2bfde76
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-7ng4v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     79s
	  kube-system                 coredns-7db6d8ff4d-nx5xs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     79s
	  kube-system                 etcd-ha-406291                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         92s
	  kube-system                 kindnet-vnds7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      79s
	  kube-system                 kube-apiserver-ha-406291             250m (12%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-ha-406291    200m (10%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-xnbqj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-ha-406291             100m (5%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-vip-ha-406291                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 77s   kube-proxy       
	  Normal  Starting                 92s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  92s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  92s   kubelet          Node ha-406291 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    92s   kubelet          Node ha-406291 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     92s   kubelet          Node ha-406291 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           80s   node-controller  Node ha-406291 event: Registered Node ha-406291 in Controller
	  Normal  NodeReady                77s   kubelet          Node ha-406291 status is now: NodeReady
	
	
	==> dmesg <==
	[Jun21 18:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051748] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037330] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.458081] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.725935] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +4.855560] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun21 18:27] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.057394] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056681] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.167604] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.147792] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.253886] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +3.905184] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.549385] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.060073] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.066237] systemd-fstab-generator[1360]: Ignoring "noauto" option for root device
	[  +0.078680] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.552032] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8] <==
	{"level":"info","ts":"2024-06-21T18:27:18.510332Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9fb372ad12afeb1b","local-member-id":"f1d2ab5330a2a0e3","added-peer-id":"f1d2ab5330a2a0e3","added-peer-peer-urls":["https://192.168.39.198:2380"]}
	{"level":"info","ts":"2024-06-21T18:27:18.510599Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-21T18:27:18.510131Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.198:2380"}
	{"level":"info","ts":"2024-06-21T18:27:18.512305Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.198:2380"}
	{"level":"info","ts":"2024-06-21T18:27:18.939239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-21T18:27:18.93929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-21T18:27:18.93932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgPreVoteResp from f1d2ab5330a2a0e3 at term 1"}
	{"level":"info","ts":"2024-06-21T18:27:18.939332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became candidate at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.939339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgVoteResp from f1d2ab5330a2a0e3 at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.939349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became leader at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.93936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f1d2ab5330a2a0e3 elected leader f1d2ab5330a2a0e3 at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.949394Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.951989Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f1d2ab5330a2a0e3","local-member-attributes":"{Name:ha-406291 ClientURLs:[https://192.168.39.198:2379]}","request-path":"/0/members/f1d2ab5330a2a0e3/attributes","cluster-id":"9fb372ad12afeb1b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-21T18:27:18.952029Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:27:18.952218Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:27:18.966375Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9fb372ad12afeb1b","local-member-id":"f1d2ab5330a2a0e3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.966532Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.966591Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.968078Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.198:2379"}
	{"level":"info","ts":"2024-06-21T18:27:18.969834Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-21T18:27:18.973596Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-21T18:27:18.986355Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-21T18:27:37.357719Z","caller":"traceutil/trace.go:171","msg":"trace[571743030] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"105.067279ms","start":"2024-06-21T18:27:37.252598Z","end":"2024-06-21T18:27:37.357665Z","steps":["trace[571743030] 'process raft request'  (duration: 48.775466ms)","trace[571743030] 'compare'  (duration: 56.093787ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T18:28:12.689426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.176174ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11593268453381319053 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:496 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:369 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-21T18:28:12.689586Z","caller":"traceutil/trace.go:171","msg":"trace[939483523] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"172.541349ms","start":"2024-06-21T18:28:12.517021Z","end":"2024-06-21T18:28:12.689563Z","steps":["trace[939483523] 'process raft request'  (duration: 46.605278ms)","trace[939483523] 'compare'  (duration: 124.988397ms)"],"step_count":2}
	
	
	==> kernel <==
	 18:28:56 up 2 min,  0 users,  load average: 0.39, 0.25, 0.10
	Linux ha-406291 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d] <==
	I0621 18:27:38.526466       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0621 18:27:38.526639       1 main.go:107] hostIP = 192.168.39.198
	podIP = 192.168.39.198
	I0621 18:27:38.526767       1 main.go:116] setting mtu 1500 for CNI 
	I0621 18:27:38.526806       1 main.go:146] kindnetd IP family: "ipv4"
	I0621 18:27:38.526839       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0621 18:27:38.925421       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:27:38.925483       1 main.go:227] handling current node
	I0621 18:27:48.946917       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:27:48.947039       1 main.go:227] handling current node
	I0621 18:27:58.955943       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:27:58.955999       1 main.go:227] handling current node
	I0621 18:28:08.959980       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:28:08.960127       1 main.go:227] handling current node
	I0621 18:28:18.967622       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:28:18.967699       1 main.go:227] handling current node
	I0621 18:28:28.971777       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:28:28.972007       1 main.go:227] handling current node
	I0621 18:28:38.976413       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:28:38.976517       1 main.go:227] handling current node
	I0621 18:28:48.989811       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:28:48.989884       1 main.go:227] handling current node
	
	
	==> kube-apiserver [2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3] <==
	I0621 18:27:21.223644       1 aggregator.go:165] initial CRD sync complete...
	I0621 18:27:21.223665       1 autoregister_controller.go:141] Starting autoregister controller
	I0621 18:27:21.223672       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0621 18:27:21.223679       1 cache.go:39] Caches are synced for autoregister controller
	I0621 18:27:21.228827       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0621 18:27:21.231033       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0621 18:27:21.231057       1 policy_source.go:224] refreshing policies
	E0621 18:27:21.244004       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0621 18:27:21.291900       1 controller.go:615] quota admission added evaluator for: namespaces
	I0621 18:27:21.301249       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0621 18:27:22.093764       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0621 18:27:22.100226       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0621 18:27:22.100345       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0621 18:27:22.679124       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0621 18:27:22.717908       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0621 18:27:22.803597       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0621 18:27:22.812663       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.198]
	I0621 18:27:22.813674       1 controller.go:615] quota admission added evaluator for: endpoints
	I0621 18:27:22.817676       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0621 18:27:23.142771       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0621 18:27:24.323202       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0621 18:27:24.338622       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0621 18:27:24.532806       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0621 18:27:36.921775       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0621 18:27:37.247444       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31] <==
	I0621 18:27:36.884308       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0621 18:27:36.943285       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0621 18:27:36.943901       1 shared_informer.go:320] Caches are synced for endpoint
	I0621 18:27:36.950305       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0621 18:27:36.991249       1 shared_informer.go:320] Caches are synced for disruption
	I0621 18:27:36.996032       1 shared_informer.go:320] Caches are synced for cronjob
	I0621 18:27:36.997228       1 shared_informer.go:320] Caches are synced for stateful set
	I0621 18:27:37.047455       1 shared_informer.go:320] Caches are synced for resource quota
	I0621 18:27:37.059247       1 shared_informer.go:320] Caches are synced for resource quota
	I0621 18:27:37.506333       1 shared_informer.go:320] Caches are synced for garbage collector
	I0621 18:27:37.559310       1 shared_informer.go:320] Caches are synced for garbage collector
	I0621 18:27:37.559392       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0621 18:27:37.600276       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="666.508123ms"
	I0621 18:27:37.660728       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.34673ms"
	I0621 18:27:37.660938       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="161.085µs"
	I0621 18:27:39.328050       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="55.475µs"
	I0621 18:27:39.330983       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.725µs"
	I0621 18:27:39.352409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.246µs"
	I0621 18:27:39.366116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.163µs"
	I0621 18:27:40.575618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="65.679µs"
	I0621 18:27:40.612176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.937752ms"
	I0621 18:27:40.612598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.232µs"
	I0621 18:27:40.634931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.444693ms"
	I0621 18:27:40.635035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.847µs"
	I0621 18:27:41.885215       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547] <==
	I0621 18:27:38.126736       1 server_linux.go:69] "Using iptables proxy"
	I0621 18:27:38.143236       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.198"]
	I0621 18:27:38.177576       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0621 18:27:38.177626       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0621 18:27:38.177644       1 server_linux.go:165] "Using iptables Proxier"
	I0621 18:27:38.180797       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0621 18:27:38.181002       1 server.go:872] "Version info" version="v1.30.2"
	I0621 18:27:38.181026       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 18:27:38.182882       1 config.go:192] "Starting service config controller"
	I0621 18:27:38.183195       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0621 18:27:38.183262       1 config.go:101] "Starting endpoint slice config controller"
	I0621 18:27:38.183278       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0621 18:27:38.184787       1 config.go:319] "Starting node config controller"
	I0621 18:27:38.184819       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0621 18:27:38.283818       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0621 18:27:38.283839       1 shared_informer.go:320] Caches are synced for service config
	I0621 18:27:38.285303       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d] <==
	W0621 18:27:21.175406       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0621 18:27:21.176948       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0621 18:27:21.176960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0621 18:27:21.176992       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:21.177025       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177056       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0621 18:27:21.177088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0621 18:27:21.177120       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0621 18:27:21.177197       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:21.177204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0621 18:27:22.041765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.041824       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.144830       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:22.144881       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0621 18:27:22.217224       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.217266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.256407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:22.256450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0621 18:27:22.361486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0621 18:27:22.361536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0621 18:27:22.366073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0621 18:27:22.366190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0621 18:27:25.267361       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 21 18:27:37 ha-406291 kubelet[1367]: I0621 18:27:37.499622    1367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz5cz\" (UniqueName: \"kubernetes.io/projected/11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073-kube-api-access-jz5cz\") pod \"kube-proxy-xnbqj\" (UID: \"11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073\") " pod="kube-system/kube-proxy-xnbqj"
	Jun 21 18:27:37 ha-406291 kubelet[1367]: I0621 18:27:37.499661    1367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e921d86f-0ac3-413e-9e85-e809139ca210-lib-modules\") pod \"kindnet-vnds7\" (UID: \"e921d86f-0ac3-413e-9e85-e809139ca210\") " pod="kube-system/kindnet-vnds7"
	Jun 21 18:27:37 ha-406291 kubelet[1367]: I0621 18:27:37.499676    1367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xggt5\" (UniqueName: \"kubernetes.io/projected/e921d86f-0ac3-413e-9e85-e809139ca210-kube-api-access-xggt5\") pod \"kindnet-vnds7\" (UID: \"e921d86f-0ac3-413e-9e85-e809139ca210\") " pod="kube-system/kindnet-vnds7"
	Jun 21 18:27:37 ha-406291 kubelet[1367]: I0621 18:27:37.499691    1367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073-kube-proxy\") pod \"kube-proxy-xnbqj\" (UID: \"11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073\") " pod="kube-system/kube-proxy-xnbqj"
	Jun 21 18:27:37 ha-406291 kubelet[1367]: I0621 18:27:37.499713    1367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e921d86f-0ac3-413e-9e85-e809139ca210-xtables-lock\") pod \"kindnet-vnds7\" (UID: \"e921d86f-0ac3-413e-9e85-e809139ca210\") " pod="kube-system/kindnet-vnds7"
	Jun 21 18:27:38 ha-406291 kubelet[1367]: I0621 18:27:38.569981    1367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-vnds7" podStartSLOduration=1.569956367 podStartE2EDuration="1.569956367s" podCreationTimestamp="2024-06-21 18:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-21 18:27:38.569450356 +0000 UTC m=+14.270448607" watchObservedRunningTime="2024-06-21 18:27:38.569956367 +0000 UTC m=+14.270954615"
	Jun 21 18:27:38 ha-406291 kubelet[1367]: I0621 18:27:38.570080    1367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xnbqj" podStartSLOduration=1.570074463 podStartE2EDuration="1.570074463s" podCreationTimestamp="2024-06-21 18:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-21 18:27:38.553429936 +0000 UTC m=+14.254428186" watchObservedRunningTime="2024-06-21 18:27:38.570074463 +0000 UTC m=+14.271072713"
	Jun 21 18:27:39 ha-406291 kubelet[1367]: I0621 18:27:39.286317    1367 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Jun 21 18:27:39 ha-406291 kubelet[1367]: I0621 18:27:39.324991    1367 topology_manager.go:215] "Topology Admit Handler" podUID="4724701c-6f0e-45ed-8fc7-70245d4fa569" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7ng4v"
	Jun 21 18:27:39 ha-406291 kubelet[1367]: I0621 18:27:39.329106    1367 topology_manager.go:215] "Topology Admit Handler" podUID="375157ef-5af0-41b9-8ed9-162e5a88c679" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nx5xs"
	Jun 21 18:27:39 ha-406291 kubelet[1367]: I0621 18:27:39.331971    1367 topology_manager.go:215] "Topology Admit Handler" podUID="f6a39ae0-87ac-492a-a711-290e61bb895e" podNamespace="kube-system" podName="storage-provisioner"
	Jun 21 18:27:39 ha-406291 kubelet[1367]: I0621 18:27:39.417475    1367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z767\" (UniqueName: \"kubernetes.io/projected/f6a39ae0-87ac-492a-a711-290e61bb895e-kube-api-access-2z767\") pod \"storage-provisioner\" (UID: \"f6a39ae0-87ac-492a-a711-290e61bb895e\") " pod="kube-system/storage-provisioner"
	Jun 21 18:27:39 ha-406291 kubelet[1367]: I0621 18:27:39.417527    1367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f6a39ae0-87ac-492a-a711-290e61bb895e-tmp\") pod \"storage-provisioner\" (UID: \"f6a39ae0-87ac-492a-a711-290e61bb895e\") " pod="kube-system/storage-provisioner"
	Jun 21 18:27:39 ha-406291 kubelet[1367]: I0621 18:27:39.417551    1367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4724701c-6f0e-45ed-8fc7-70245d4fa569-config-volume\") pod \"coredns-7db6d8ff4d-7ng4v\" (UID: \"4724701c-6f0e-45ed-8fc7-70245d4fa569\") " pod="kube-system/coredns-7db6d8ff4d-7ng4v"
	Jun 21 18:27:39 ha-406291 kubelet[1367]: I0621 18:27:39.417593    1367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2j2m\" (UniqueName: \"kubernetes.io/projected/4724701c-6f0e-45ed-8fc7-70245d4fa569-kube-api-access-k2j2m\") pod \"coredns-7db6d8ff4d-7ng4v\" (UID: \"4724701c-6f0e-45ed-8fc7-70245d4fa569\") " pod="kube-system/coredns-7db6d8ff4d-7ng4v"
	Jun 21 18:27:39 ha-406291 kubelet[1367]: I0621 18:27:39.417618    1367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/375157ef-5af0-41b9-8ed9-162e5a88c679-config-volume\") pod \"coredns-7db6d8ff4d-nx5xs\" (UID: \"375157ef-5af0-41b9-8ed9-162e5a88c679\") " pod="kube-system/coredns-7db6d8ff4d-nx5xs"
	Jun 21 18:27:39 ha-406291 kubelet[1367]: I0621 18:27:39.417651    1367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp48q\" (UniqueName: \"kubernetes.io/projected/375157ef-5af0-41b9-8ed9-162e5a88c679-kube-api-access-mp48q\") pod \"coredns-7db6d8ff4d-nx5xs\" (UID: \"375157ef-5af0-41b9-8ed9-162e5a88c679\") " pod="kube-system/coredns-7db6d8ff4d-nx5xs"
	Jun 21 18:27:40 ha-406291 kubelet[1367]: I0621 18:27:40.573990    1367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7ng4v" podStartSLOduration=3.5739735489999997 podStartE2EDuration="3.573973549s" podCreationTimestamp="2024-06-21 18:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-21 18:27:40.57266985 +0000 UTC m=+16.273668099" watchObservedRunningTime="2024-06-21 18:27:40.573973549 +0000 UTC m=+16.274971799"
	Jun 21 18:27:40 ha-406291 kubelet[1367]: I0621 18:27:40.641300    1367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nx5xs" podStartSLOduration=3.641266757 podStartE2EDuration="3.641266757s" podCreationTimestamp="2024-06-21 18:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-21 18:27:40.619760797 +0000 UTC m=+16.320759047" watchObservedRunningTime="2024-06-21 18:27:40.641266757 +0000 UTC m=+16.342265008"
	Jun 21 18:27:40 ha-406291 kubelet[1367]: I0621 18:27:40.642281    1367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=3.642263669 podStartE2EDuration="3.642263669s" podCreationTimestamp="2024-06-21 18:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-21 18:27:40.640882488 +0000 UTC m=+16.341880738" watchObservedRunningTime="2024-06-21 18:27:40.642263669 +0000 UTC m=+16.343261933"
	Jun 21 18:28:24 ha-406291 kubelet[1367]: E0621 18:28:24.484327    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:28:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:28:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:28:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:28:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b] <==
	I0621 18:27:40.053572       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0621 18:27:40.071388       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0621 18:27:40.071477       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0621 18:27:40.092555       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0621 18:27:40.093079       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-406291_9408dd1b-5b4e-4652-aac5-9de4270d5daf!
	I0621 18:27:40.092824       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3a538f5e-15b2-4fb1-aabe-7ae7b744ce8d", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-406291_9408dd1b-5b4e-4652-aac5-9de4270d5daf became leader
	I0621 18:27:40.194107       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-406291_9408dd1b-5b4e-4652-aac5-9de4270d5daf!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-406291 -n ha-406291
helpers_test.go:261: (dbg) Run:  kubectl --context ha-406291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StartCluster (134.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (692.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- rollout status deployment/busybox
E0621 18:30:54.861777   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
E0621 18:30:54.867541   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
E0621 18:30:54.877873   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
E0621 18:30:54.898123   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
E0621 18:30:54.938445   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
E0621 18:30:55.018767   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
E0621 18:30:55.179196   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
E0621 18:30:55.499829   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
E0621 18:30:56.140856   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
E0621 18:30:57.421502   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
E0621 18:30:59.982566   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
E0621 18:31:05.103011   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
E0621 18:31:15.343196   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
E0621 18:31:35.823688   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
E0621 18:32:16.784575   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
E0621 18:33:38.706547   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
E0621 18:35:54.862246   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
E0621 18:36:22.549445   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
ha_test.go:133: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-406291 -- rollout status deployment/busybox: exit status 1 (10m3.964202783s)

                                                
                                                
-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 1 of 3 updated replicas are available...

                                                
                                                
-- /stdout --
** stderr ** 
	error: deployment "busybox" exceeded its progress deadline

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
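The rollout stalled because the busybox Deployment exceeded its progress deadline with only one of three replicas available. A minimal diagnostic sketch, assuming the same ha-406291 kubectl context used in the post-mortem below (these commands are not part of the recorded test run):

	# Inspect the Deployment's conditions; a stalled rollout should report Progressing=False
	# with reason ProgressDeadlineExceeded.
	kubectl --context ha-406291 describe deployment busybox
	# Re-check rollout progress with a bounded wait.
	kubectl --context ha-406291 rollout status deployment/busybox --timeout=60s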
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:159: failed to resolve pod IPs: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
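Only a single pod IP (10.244.0.4) was ever reported, which lines up with the StartCluster failure above: the additional control-plane nodes for the HA cluster never registered, which likely left two of the three busybox replicas with nowhere to schedule. A short sketch for confirming that, assuming the ha-406291 context (not part of the test output):

	# An HA start should have produced three control-plane nodes; count what actually registered.
	kubectl --context ha-406291 get nodes -o wide
	# Show which busybox replicas received a node and a pod IP.
	kubectl --context ha-406291 get pods -o wide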
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- exec busybox-fc5497c4f-drm4v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-406291 -- exec busybox-fc5497c4f-drm4v -- nslookup kubernetes.io: exit status 1 (109.65675ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-drm4v does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:173: Pod busybox-fc5497c4f-drm4v could not resolve 'kubernetes.io': exit status 1
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- exec busybox-fc5497c4f-p2c87 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-406291 -- exec busybox-fc5497c4f-p2c87 -- nslookup kubernetes.io: exit status 1 (113.358488ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-p2c87 does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:173: Pod busybox-fc5497c4f-p2c87 could not resolve 'kubernetes.io': exit status 1
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- exec busybox-fc5497c4f-qvl48 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- exec busybox-fc5497c4f-drm4v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-406291 -- exec busybox-fc5497c4f-drm4v -- nslookup kubernetes.default: exit status 1 (111.132675ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-drm4v does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:183: Pod busybox-fc5497c4f-drm4v could not resolve 'kubernetes.default': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- exec busybox-fc5497c4f-p2c87 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-406291 -- exec busybox-fc5497c4f-p2c87 -- nslookup kubernetes.default: exit status 1 (115.920394ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-p2c87 does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:183: Pod busybox-fc5497c4f-p2c87 could not resolve 'kubernetes.default': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- exec busybox-fc5497c4f-qvl48 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- exec busybox-fc5497c4f-drm4v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-406291 -- exec busybox-fc5497c4f-drm4v -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (108.35033ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-drm4v does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:191: Pod busybox-fc5497c4f-drm4v could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- exec busybox-fc5497c4f-p2c87 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-406291 -- exec busybox-fc5497c4f-p2c87 -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (110.372588ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-p2c87 does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:191: Pod busybox-fc5497c4f-p2c87 could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- exec busybox-fc5497c4f-qvl48 -- nslookup kubernetes.default.svc.cluster.local
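The repeated "does not have a host assigned" errors mean the drm4v and p2c87 pods were still unscheduled (no node name set), so the API server rejects kubectl exec against them; only the qvl48 pod, which did land on the node, resolves DNS successfully. A hedged sketch for listing the stuck replicas and their scheduling events, assuming the ha-406291 context (not part of the recorded run):

	# List pods that never left the Pending phase.
	kubectl --context ha-406291 get pods --field-selector=status.phase=Pending
	# The Events section explains why the scheduler could not place this replica.
	kubectl --context ha-406291 describe pod busybox-fc5497c4f-drm4v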
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-406291 -n ha-406291
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-406291 logs -n 25: (1.213939312s)
helpers_test.go:252: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image   | functional-620822 image ls           | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	| delete  | -p functional-620822                 | functional-620822 | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC | 21 Jun 24 18:26 UTC |
	| start   | -p ha-406291 --wait=true             | ha-406291         | jenkins | v1.33.1 | 21 Jun 24 18:26 UTC |                     |
	|         | --memory=2200 --ha                   |                   |         |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |         |         |                     |                     |
	|         | --driver=kvm2                        |                   |         |         |                     |                     |
	|         | --container-runtime=crio             |                   |         |         |                     |                     |
	| kubectl | -p ha-406291 -- apply -f             | ha-406291         | jenkins | v1.33.1 | 21 Jun 24 18:28 UTC | 21 Jun 24 18:28 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |         |         |                     |                     |
	| kubectl | -p ha-406291 -- rollout status       | ha-406291         | jenkins | v1.33.1 | 21 Jun 24 18:28 UTC |                     |
	|         | deployment/busybox                   |                   |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291         | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291         | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291         | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291         | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291         | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291         | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291         | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291         | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291         | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291         | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291         | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291         | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291         | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291         | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291         | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291         | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291         | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291         | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291         | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291         | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/21 18:26:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0621 18:26:42.447747   30068 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:26:42.447858   30068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:26:42.447867   30068 out.go:304] Setting ErrFile to fd 2...
	I0621 18:26:42.447871   30068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:26:42.448064   30068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:26:42.448611   30068 out.go:298] Setting JSON to false
	I0621 18:26:42.449397   30068 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4100,"bootTime":1718990302,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 18:26:42.449454   30068 start.go:139] virtualization: kvm guest
	I0621 18:26:42.451750   30068 out.go:177] * [ha-406291] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 18:26:42.453097   30068 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 18:26:42.453116   30068 notify.go:220] Checking for updates...
	I0621 18:26:42.456195   30068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 18:26:42.457398   30068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:26:42.458579   30068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.459798   30068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 18:26:42.461088   30068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 18:26:42.462525   30068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 18:26:42.497263   30068 out.go:177] * Using the kvm2 driver based on user configuration
	I0621 18:26:42.498734   30068 start.go:297] selected driver: kvm2
	I0621 18:26:42.498753   30068 start.go:901] validating driver "kvm2" against <nil>
	I0621 18:26:42.498763   30068 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 18:26:42.499421   30068 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:26:42.499483   30068 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 18:26:42.513772   30068 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 18:26:42.513840   30068 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0621 18:26:42.514036   30068 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0621 18:26:42.514063   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:26:42.514070   30068 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0621 18:26:42.514080   30068 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0621 18:26:42.514119   30068 start.go:340] cluster config:
	{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:26:42.514203   30068 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:26:42.515839   30068 out.go:177] * Starting "ha-406291" primary control-plane node in "ha-406291" cluster
	I0621 18:26:42.516925   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:26:42.516952   30068 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 18:26:42.516960   30068 cache.go:56] Caching tarball of preloaded images
	I0621 18:26:42.517025   30068 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:26:42.517035   30068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:26:42.517302   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:26:42.517325   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json: {Name:mkd43eceea282503c79b6e4b90bbf7258fcf8b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:26:42.517445   30068 start.go:360] acquireMachinesLock for ha-406291: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:26:42.517470   30068 start.go:364] duration metric: took 13.314µs to acquireMachinesLock for "ha-406291"
	I0621 18:26:42.517485   30068 start.go:93] Provisioning new machine with config: &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:26:42.517531   30068 start.go:125] createHost starting for "" (driver="kvm2")
	I0621 18:26:42.518937   30068 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0621 18:26:42.519071   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:26:42.519109   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:26:42.533235   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0621 18:26:42.533669   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:26:42.534312   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:26:42.534360   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:26:42.534665   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:26:42.534880   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:26:42.535018   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:26:42.535180   30068 start.go:159] libmachine.API.Create for "ha-406291" (driver="kvm2")
	I0621 18:26:42.535209   30068 client.go:168] LocalClient.Create starting
	I0621 18:26:42.535233   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem
	I0621 18:26:42.535267   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:26:42.535282   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:26:42.535339   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem
	I0621 18:26:42.535357   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:26:42.535367   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:26:42.535383   30068 main.go:141] libmachine: Running pre-create checks...
	I0621 18:26:42.535396   30068 main.go:141] libmachine: (ha-406291) Calling .PreCreateCheck
	I0621 18:26:42.535734   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:26:42.536101   30068 main.go:141] libmachine: Creating machine...
	I0621 18:26:42.536113   30068 main.go:141] libmachine: (ha-406291) Calling .Create
	I0621 18:26:42.536232   30068 main.go:141] libmachine: (ha-406291) Creating KVM machine...
	I0621 18:26:42.537484   30068 main.go:141] libmachine: (ha-406291) DBG | found existing default KVM network
	I0621 18:26:42.538310   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.538153   30091 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
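
The subnet picker above expands the chosen /24 into gateway, DHCP client range and broadcast fields. For illustration only (this is not minikube's network.go), a minimal Go sketch that derives the same fields from a /24 CIDR:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Parse the /24 picked in the log above. This sketch only handles /24 networks.
        _, ipNet, err := net.ParseCIDR("192.168.39.0/24")
        if err != nil {
            panic(err)
        }
        base := ipNet.IP.To4()
        gateway := net.IPv4(base[0], base[1], base[2], 1)   // first usable host
        clientMin := net.IPv4(base[0], base[1], base[2], 2) // first DHCP client
        clientMax := net.IPv4(base[0], base[1], base[2], 254)
        broadcast := net.IPv4(base[0], base[1], base[2], 255)
        fmt.Println("gateway:", gateway, "clients:", clientMin, "-", clientMax, "broadcast:", broadcast)
    }
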
	I0621 18:26:42.538339   30068 main.go:141] libmachine: (ha-406291) DBG | created network xml: 
	I0621 18:26:42.538346   30068 main.go:141] libmachine: (ha-406291) DBG | <network>
	I0621 18:26:42.538355   30068 main.go:141] libmachine: (ha-406291) DBG |   <name>mk-ha-406291</name>
	I0621 18:26:42.538371   30068 main.go:141] libmachine: (ha-406291) DBG |   <dns enable='no'/>
	I0621 18:26:42.538385   30068 main.go:141] libmachine: (ha-406291) DBG |   
	I0621 18:26:42.538392   30068 main.go:141] libmachine: (ha-406291) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0621 18:26:42.538400   30068 main.go:141] libmachine: (ha-406291) DBG |     <dhcp>
	I0621 18:26:42.538412   30068 main.go:141] libmachine: (ha-406291) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0621 18:26:42.538421   30068 main.go:141] libmachine: (ha-406291) DBG |     </dhcp>
	I0621 18:26:42.538439   30068 main.go:141] libmachine: (ha-406291) DBG |   </ip>
	I0621 18:26:42.538451   30068 main.go:141] libmachine: (ha-406291) DBG |   
	I0621 18:26:42.538458   30068 main.go:141] libmachine: (ha-406291) DBG | </network>
	I0621 18:26:42.538470   30068 main.go:141] libmachine: (ha-406291) DBG | 
	I0621 18:26:42.543401   30068 main.go:141] libmachine: (ha-406291) DBG | trying to create private KVM network mk-ha-406291 192.168.39.0/24...
	I0621 18:26:42.606041   30068 main.go:141] libmachine: (ha-406291) DBG | private KVM network mk-ha-406291 192.168.39.0/24 created
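
With a subnet chosen, creating the private network amounts to defining the XML shown above in libvirt and starting it. A hedged sketch of the equivalent via the virsh CLI (the real kvm2 driver talks to libvirt directly; the temp-file handling here is an assumption for illustration):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        xml := `<network>
      <name>mk-ha-406291</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`
        // Write the definition to a temp file and hand it to virsh.
        f, err := os.CreateTemp("", "mk-ha-406291-*.xml")
        if err != nil {
            panic(err)
        }
        defer os.Remove(f.Name())
        if _, err := f.WriteString(xml); err != nil {
            panic(err)
        }
        f.Close()
        for _, args := range [][]string{
            {"net-define", f.Name()},
            {"net-start", "mk-ha-406291"},
        } {
            out, err := exec.Command("virsh", args...).CombinedOutput()
            fmt.Printf("virsh %v: %s\n", args, out)
            if err != nil {
                panic(err)
            }
        }
    }
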
	I0621 18:26:42.606072   30068 main.go:141] libmachine: (ha-406291) Setting up store path in /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 ...
	I0621 18:26:42.606091   30068 main.go:141] libmachine: (ha-406291) Building disk image from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 18:26:42.606165   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.606075   30091 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.606280   30068 main.go:141] libmachine: (ha-406291) Downloading /home/jenkins/minikube-integration/19112-8111/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0621 18:26:42.829374   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.829262   30091 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa...
	I0621 18:26:42.941790   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.941666   30091 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/ha-406291.rawdisk...
	I0621 18:26:42.941834   30068 main.go:141] libmachine: (ha-406291) DBG | Writing magic tar header
	I0621 18:26:42.941844   30068 main.go:141] libmachine: (ha-406291) DBG | Writing SSH key tar header
	I0621 18:26:42.941852   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.941778   30091 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 ...
	I0621 18:26:42.941909   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291
	I0621 18:26:42.941989   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 (perms=drwx------)
	I0621 18:26:42.942007   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines
	I0621 18:26:42.942019   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines (perms=drwxr-xr-x)
	I0621 18:26:42.942033   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube (perms=drwxr-xr-x)
	I0621 18:26:42.942053   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111 (perms=drwxrwxr-x)
	I0621 18:26:42.942060   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.942069   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111
	I0621 18:26:42.942075   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0621 18:26:42.942080   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0621 18:26:42.942088   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins
	I0621 18:26:42.942104   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0621 18:26:42.942117   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home
	I0621 18:26:42.942128   30068 main.go:141] libmachine: (ha-406291) DBG | Skipping /home - not owner
	I0621 18:26:42.942142   30068 main.go:141] libmachine: (ha-406291) Creating domain...
	I0621 18:26:42.943154   30068 main.go:141] libmachine: (ha-406291) define libvirt domain using xml: 
	I0621 18:26:42.943176   30068 main.go:141] libmachine: (ha-406291) <domain type='kvm'>
	I0621 18:26:42.943183   30068 main.go:141] libmachine: (ha-406291)   <name>ha-406291</name>
	I0621 18:26:42.943188   30068 main.go:141] libmachine: (ha-406291)   <memory unit='MiB'>2200</memory>
	I0621 18:26:42.943199   30068 main.go:141] libmachine: (ha-406291)   <vcpu>2</vcpu>
	I0621 18:26:42.943203   30068 main.go:141] libmachine: (ha-406291)   <features>
	I0621 18:26:42.943208   30068 main.go:141] libmachine: (ha-406291)     <acpi/>
	I0621 18:26:42.943212   30068 main.go:141] libmachine: (ha-406291)     <apic/>
	I0621 18:26:42.943217   30068 main.go:141] libmachine: (ha-406291)     <pae/>
	I0621 18:26:42.943223   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943229   30068 main.go:141] libmachine: (ha-406291)   </features>
	I0621 18:26:42.943234   30068 main.go:141] libmachine: (ha-406291)   <cpu mode='host-passthrough'>
	I0621 18:26:42.943255   30068 main.go:141] libmachine: (ha-406291)   
	I0621 18:26:42.943266   30068 main.go:141] libmachine: (ha-406291)   </cpu>
	I0621 18:26:42.943284   30068 main.go:141] libmachine: (ha-406291)   <os>
	I0621 18:26:42.943318   30068 main.go:141] libmachine: (ha-406291)     <type>hvm</type>
	I0621 18:26:42.943328   30068 main.go:141] libmachine: (ha-406291)     <boot dev='cdrom'/>
	I0621 18:26:42.943333   30068 main.go:141] libmachine: (ha-406291)     <boot dev='hd'/>
	I0621 18:26:42.943341   30068 main.go:141] libmachine: (ha-406291)     <bootmenu enable='no'/>
	I0621 18:26:42.943345   30068 main.go:141] libmachine: (ha-406291)   </os>
	I0621 18:26:42.943355   30068 main.go:141] libmachine: (ha-406291)   <devices>
	I0621 18:26:42.943360   30068 main.go:141] libmachine: (ha-406291)     <disk type='file' device='cdrom'>
	I0621 18:26:42.943371   30068 main.go:141] libmachine: (ha-406291)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/boot2docker.iso'/>
	I0621 18:26:42.943384   30068 main.go:141] libmachine: (ha-406291)       <target dev='hdc' bus='scsi'/>
	I0621 18:26:42.943397   30068 main.go:141] libmachine: (ha-406291)       <readonly/>
	I0621 18:26:42.943404   30068 main.go:141] libmachine: (ha-406291)     </disk>
	I0621 18:26:42.943417   30068 main.go:141] libmachine: (ha-406291)     <disk type='file' device='disk'>
	I0621 18:26:42.943429   30068 main.go:141] libmachine: (ha-406291)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0621 18:26:42.943445   30068 main.go:141] libmachine: (ha-406291)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/ha-406291.rawdisk'/>
	I0621 18:26:42.943456   30068 main.go:141] libmachine: (ha-406291)       <target dev='hda' bus='virtio'/>
	I0621 18:26:42.943478   30068 main.go:141] libmachine: (ha-406291)     </disk>
	I0621 18:26:42.943499   30068 main.go:141] libmachine: (ha-406291)     <interface type='network'>
	I0621 18:26:42.943509   30068 main.go:141] libmachine: (ha-406291)       <source network='mk-ha-406291'/>
	I0621 18:26:42.943513   30068 main.go:141] libmachine: (ha-406291)       <model type='virtio'/>
	I0621 18:26:42.943519   30068 main.go:141] libmachine: (ha-406291)     </interface>
	I0621 18:26:42.943526   30068 main.go:141] libmachine: (ha-406291)     <interface type='network'>
	I0621 18:26:42.943532   30068 main.go:141] libmachine: (ha-406291)       <source network='default'/>
	I0621 18:26:42.943539   30068 main.go:141] libmachine: (ha-406291)       <model type='virtio'/>
	I0621 18:26:42.943544   30068 main.go:141] libmachine: (ha-406291)     </interface>
	I0621 18:26:42.943549   30068 main.go:141] libmachine: (ha-406291)     <serial type='pty'>
	I0621 18:26:42.943554   30068 main.go:141] libmachine: (ha-406291)       <target port='0'/>
	I0621 18:26:42.943560   30068 main.go:141] libmachine: (ha-406291)     </serial>
	I0621 18:26:42.943565   30068 main.go:141] libmachine: (ha-406291)     <console type='pty'>
	I0621 18:26:42.943571   30068 main.go:141] libmachine: (ha-406291)       <target type='serial' port='0'/>
	I0621 18:26:42.943583   30068 main.go:141] libmachine: (ha-406291)     </console>
	I0621 18:26:42.943593   30068 main.go:141] libmachine: (ha-406291)     <rng model='virtio'>
	I0621 18:26:42.943602   30068 main.go:141] libmachine: (ha-406291)       <backend model='random'>/dev/random</backend>
	I0621 18:26:42.943609   30068 main.go:141] libmachine: (ha-406291)     </rng>
	I0621 18:26:42.943617   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943621   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943627   30068 main.go:141] libmachine: (ha-406291)   </devices>
	I0621 18:26:42.943631   30068 main.go:141] libmachine: (ha-406291) </domain>
	I0621 18:26:42.943638   30068 main.go:141] libmachine: (ha-406291) 
	I0621 18:26:42.948298   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:44:10:c4 in network default
	I0621 18:26:42.948968   30068 main.go:141] libmachine: (ha-406291) Ensuring networks are active...
	I0621 18:26:42.948988   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:42.949710   30068 main.go:141] libmachine: (ha-406291) Ensuring network default is active
	I0621 18:26:42.950033   30068 main.go:141] libmachine: (ha-406291) Ensuring network mk-ha-406291 is active
	I0621 18:26:42.950493   30068 main.go:141] libmachine: (ha-406291) Getting domain xml...
	I0621 18:26:42.951151   30068 main.go:141] libmachine: (ha-406291) Creating domain...
	I0621 18:26:44.128421   30068 main.go:141] libmachine: (ha-406291) Waiting to get IP...
	I0621 18:26:44.129183   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.129530   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.129550   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.129513   30091 retry.go:31] will retry after 273.280189ms: waiting for machine to come up
	I0621 18:26:44.404590   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.405440   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.405467   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.405386   30091 retry.go:31] will retry after 363.287979ms: waiting for machine to come up
	I0621 18:26:44.769749   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.770188   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.770217   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.770146   30091 retry.go:31] will retry after 445.9009ms: waiting for machine to come up
	I0621 18:26:45.217708   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:45.218113   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:45.218132   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:45.218075   30091 retry.go:31] will retry after 497.769852ms: waiting for machine to come up
	I0621 18:26:45.717913   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:45.718380   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:45.718402   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:45.718333   30091 retry.go:31] will retry after 609.412902ms: waiting for machine to come up
	I0621 18:26:46.329589   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:46.330043   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:46.330077   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:46.330033   30091 retry.go:31] will retry after 668.226784ms: waiting for machine to come up
	I0621 18:26:46.999851   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:47.000352   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:47.000399   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:47.000310   30091 retry.go:31] will retry after 928.90777ms: waiting for machine to come up
	I0621 18:26:47.931043   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:47.931568   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:47.931598   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:47.931527   30091 retry.go:31] will retry after 1.407643188s: waiting for machine to come up
	I0621 18:26:49.341126   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:49.341529   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:49.341557   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:49.341489   30091 retry.go:31] will retry after 1.657120945s: waiting for machine to come up
	I0621 18:26:51.001518   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:51.001999   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:51.002022   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:51.001955   30091 retry.go:31] will retry after 1.506025988s: waiting for machine to come up
	I0621 18:26:52.509823   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:52.510314   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:52.510342   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:52.510269   30091 retry.go:31] will retry after 2.859818514s: waiting for machine to come up
	I0621 18:26:55.371181   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:55.371726   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:55.371755   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:55.371678   30091 retry.go:31] will retry after 3.374080501s: waiting for machine to come up
	I0621 18:26:58.747494   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:58.748019   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:58.748039   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:58.747991   30091 retry.go:31] will retry after 4.386740875s: waiting for machine to come up
	I0621 18:27:03.136546   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.137046   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has current primary IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.137063   30068 main.go:141] libmachine: (ha-406291) Found IP for machine: 192.168.39.198
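
The retry lines above poll libvirt for a DHCP lease matching the domain's MAC address, with a growing delay between attempts. A rough sketch of such a loop that shells out to `virsh net-dhcp-leases` (an illustrative mechanism only; the driver queries libvirt's API rather than the CLI):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForIP polls the network's DHCP leases until the MAC shows up or the deadline passes.
    func waitForIP(network, mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            out, err := exec.Command("virsh", "net-dhcp-leases", network).CombinedOutput()
            if err == nil {
                for _, line := range strings.Split(string(out), "\n") {
                    if strings.Contains(line, mac) {
                        // The lease row carries "IP/prefix"; keep the address part.
                        for _, field := range strings.Fields(line) {
                            if strings.Contains(field, "/") {
                                return strings.SplitN(field, "/", 2)[0], nil
                            }
                        }
                    }
                }
            }
            time.Sleep(delay)
            delay += delay / 2 // grow the wait, roughly like the retry intervals in the log above
        }
        return "", fmt.Errorf("no DHCP lease for %s in %s after %s", mac, network, timeout)
    }

    func main() {
        ip, err := waitForIP("mk-ha-406291", "52:54:00:38:dc:46", 2*time.Minute)
        if err != nil {
            panic(err)
        }
        fmt.Println("found IP:", ip)
    }
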
	I0621 18:27:03.137079   30068 main.go:141] libmachine: (ha-406291) Reserving static IP address...
	I0621 18:27:03.137427   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find host DHCP lease matching {name: "ha-406291", mac: "52:54:00:38:dc:46", ip: "192.168.39.198"} in network mk-ha-406291
	I0621 18:27:03.211473   30068 main.go:141] libmachine: (ha-406291) DBG | Getting to WaitForSSH function...
	I0621 18:27:03.211506   30068 main.go:141] libmachine: (ha-406291) Reserved static IP address: 192.168.39.198
	I0621 18:27:03.211519   30068 main.go:141] libmachine: (ha-406291) Waiting for SSH to be available...
	I0621 18:27:03.214029   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.214477   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291
	I0621 18:27:03.214509   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find defined IP address of network mk-ha-406291 interface with MAC address 52:54:00:38:dc:46
	I0621 18:27:03.214661   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH client type: external
	I0621 18:27:03.214702   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa (-rw-------)
	I0621 18:27:03.214745   30068 main.go:141] libmachine: (ha-406291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:03.214771   30068 main.go:141] libmachine: (ha-406291) DBG | About to run SSH command:
	I0621 18:27:03.214784   30068 main.go:141] libmachine: (ha-406291) DBG | exit 0
	I0621 18:27:03.218578   30068 main.go:141] libmachine: (ha-406291) DBG | SSH cmd err, output: exit status 255: 
	I0621 18:27:03.218603   30068 main.go:141] libmachine: (ha-406291) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0621 18:27:03.218614   30068 main.go:141] libmachine: (ha-406291) DBG | command : exit 0
	I0621 18:27:03.218630   30068 main.go:141] libmachine: (ha-406291) DBG | err     : exit status 255
	I0621 18:27:03.218643   30068 main.go:141] libmachine: (ha-406291) DBG | output  : 
	I0621 18:27:06.220803   30068 main.go:141] libmachine: (ha-406291) DBG | Getting to WaitForSSH function...
	I0621 18:27:06.223287   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.223552   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.223591   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.223725   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH client type: external
	I0621 18:27:06.223751   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa (-rw-------)
	I0621 18:27:06.223775   30068 main.go:141] libmachine: (ha-406291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:06.223788   30068 main.go:141] libmachine: (ha-406291) DBG | About to run SSH command:
	I0621 18:27:06.223797   30068 main.go:141] libmachine: (ha-406291) DBG | exit 0
	I0621 18:27:06.345962   30068 main.go:141] libmachine: (ha-406291) DBG | SSH cmd err, output: <nil>: 
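
WaitForSSH keeps running `exit 0` over ssh with host-key checking disabled until the guest answers; the first attempt above fails with status 255 because sshd is not up yet. A minimal sketch of that probe, reusing the ssh options visible in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // probeSSH runs "exit 0" on the guest and reports whether sshd answered.
    func probeSSH(ip, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + ip,
            "exit 0",
        }
        return exec.Command("ssh", args...).Run()
    }

    func main() {
        ip := "192.168.39.198"
        key := "/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa"
        for i := 0; i < 20; i++ {
            if err := probeSSH(ip, key); err == nil {
                fmt.Println("ssh is up")
                return
            }
            time.Sleep(3 * time.Second) // the log retries after roughly 3s
        }
        fmt.Println("gave up waiting for ssh")
    }
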
	I0621 18:27:06.346198   30068 main.go:141] libmachine: (ha-406291) KVM machine creation complete!
	I0621 18:27:06.346530   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:27:06.347151   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:06.347376   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:06.347539   30068 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0621 18:27:06.347553   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:06.349257   30068 main.go:141] libmachine: Detecting operating system of created instance...
	I0621 18:27:06.349272   30068 main.go:141] libmachine: Waiting for SSH to be available...
	I0621 18:27:06.349278   30068 main.go:141] libmachine: Getting to WaitForSSH function...
	I0621 18:27:06.349284   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.351365   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.351709   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.351738   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.351848   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.352053   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.352215   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.352441   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.352676   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.352926   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.352939   30068 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0621 18:27:06.449038   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:06.449066   30068 main.go:141] libmachine: Detecting the provisioner...
	I0621 18:27:06.449077   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.451811   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.452202   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.452223   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.452405   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.452602   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.452762   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.452898   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.453074   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.453321   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.453334   30068 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0621 18:27:06.550539   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0621 18:27:06.550611   30068 main.go:141] libmachine: found compatible host: buildroot
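
Provisioner detection reduces to reading /etc/os-release from the guest and matching its fields (Buildroot here). A small sketch of parsing that key=value format:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseOSRelease turns the key=value lines from /etc/os-release into a map,
    // stripping surrounding quotes as in PRETTY_NAME="Buildroot 2023.02.9".
    func parseOSRelease(contents string) map[string]string {
        out := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(contents))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            key, value, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            out[key] = strings.Trim(value, `"`)
        }
        return out
    }

    func main() {
        sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
        info := parseOSRelease(sample)
        fmt.Println("ID:", info["ID"], "VERSION_ID:", info["VERSION_ID"])
    }
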
	I0621 18:27:06.550618   30068 main.go:141] libmachine: Provisioning with buildroot...
	I0621 18:27:06.550625   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.550871   30068 buildroot.go:166] provisioning hostname "ha-406291"
	I0621 18:27:06.550891   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.551068   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.553701   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.554112   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.554138   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.554279   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.554452   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.554601   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.554725   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.554869   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.555029   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.555040   30068 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291 && echo "ha-406291" | sudo tee /etc/hostname
	I0621 18:27:06.664012   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291
	
	I0621 18:27:06.664038   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.666600   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.666923   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.666952   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.667091   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.667277   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.667431   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.667559   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.667745   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.667932   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.667949   30068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:27:06.778156   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:06.778199   30068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:27:06.778224   30068 buildroot.go:174] setting up certificates
	I0621 18:27:06.778237   30068 provision.go:84] configureAuth start
	I0621 18:27:06.778250   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.778526   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:06.781267   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.781583   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.781610   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.781773   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.784225   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.784546   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.784564   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.784717   30068 provision.go:143] copyHostCerts
	I0621 18:27:06.784747   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:06.784796   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:27:06.784813   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:06.784893   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:27:06.784992   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:06.785017   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:27:06.785023   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:06.785064   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:27:06.785126   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:06.785153   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:27:06.785162   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:06.785194   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:27:06.785257   30068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291 san=[127.0.0.1 192.168.39.198 ha-406291 localhost minikube]
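
The server certificate is generated with SANs covering 127.0.0.1, the VM IP, the hostname, localhost and minikube, signed by the local CA (ca.pem/ca-key.pem). A rough, self-contained sketch of building such a SAN certificate with crypto/x509; it self-signs to stay short, whereas the real one is CA-signed:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-406291"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-406291", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.198")},
        }
        // Self-signed for brevity; minikube signs with the CA key instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            panic(err)
        }
    }
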
	I0621 18:27:06.904910   30068 provision.go:177] copyRemoteCerts
	I0621 18:27:06.904976   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:27:06.905004   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.907600   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.907883   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.907916   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.908115   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.908308   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.908462   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.908599   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:06.987463   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:27:06.987540   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:27:07.009572   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:27:07.009661   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0621 18:27:07.031219   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:27:07.031333   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
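
copyRemoteCerts pushes each PEM file to the guest and installs it under /etc/docker. One illustrative way to do that from Go is to stream the file through ssh into `sudo tee` (a plausible pattern, not necessarily ssh_runner's exact mechanism):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // pushFile copies a local file to remotePath on the guest via ssh + sudo tee.
    func pushFile(ip, keyPath, localPath, remotePath string) error {
        f, err := os.Open(localPath)
        if err != nil {
            return err
        }
        defer f.Close()
        cmd := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-i", keyPath,
            "docker@"+ip,
            fmt.Sprintf("sudo tee %s >/dev/null", remotePath),
        )
        cmd.Stdin = f
        return cmd.Run()
    }

    func main() {
        err := pushFile("192.168.39.198",
            "/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa",
            "/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem",
            "/etc/docker/ca.pem")
        if err != nil {
            panic(err)
        }
        fmt.Println("pushed ca.pem")
    }
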
	I0621 18:27:07.052682   30068 provision.go:87] duration metric: took 274.433059ms to configureAuth
	I0621 18:27:07.052709   30068 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:27:07.052895   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:07.052984   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.055368   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.055720   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.055742   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.055971   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.056161   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.056324   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.056453   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.056615   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:07.056785   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:07.056814   30068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:27:07.307055   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 18:27:07.307083   30068 main.go:141] libmachine: Checking connection to Docker...
	I0621 18:27:07.307105   30068 main.go:141] libmachine: (ha-406291) Calling .GetURL
	I0621 18:27:07.308373   30068 main.go:141] libmachine: (ha-406291) DBG | Using libvirt version 6000000
	I0621 18:27:07.310322   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.310631   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.310658   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.310756   30068 main.go:141] libmachine: Docker is up and running!
	I0621 18:27:07.310768   30068 main.go:141] libmachine: Reticulating splines...
	I0621 18:27:07.310774   30068 client.go:171] duration metric: took 24.775558818s to LocalClient.Create
	I0621 18:27:07.310795   30068 start.go:167] duration metric: took 24.775614868s to libmachine.API.Create "ha-406291"
	I0621 18:27:07.310807   30068 start.go:293] postStartSetup for "ha-406291" (driver="kvm2")
	I0621 18:27:07.310818   30068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:27:07.310835   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.311186   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:27:07.311208   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.313308   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.313543   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.313581   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.313682   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.313855   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.314042   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.314209   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.391859   30068 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:27:07.396062   30068 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:27:07.396083   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:27:07.396132   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:27:07.396193   30068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:27:07.396202   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:27:07.396289   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:27:07.405435   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:07.427927   30068 start.go:296] duration metric: took 117.075834ms for postStartSetup
	I0621 18:27:07.427984   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:27:07.428562   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:07.431157   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.431479   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.431523   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.431791   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:07.431969   30068 start.go:128] duration metric: took 24.914429669s to createHost
	I0621 18:27:07.431990   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.434121   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.434421   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.434445   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.434510   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.434692   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.434865   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.435009   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.435168   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:07.435372   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:07.435384   30068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0621 18:27:07.530141   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718994427.508226463
	
	I0621 18:27:07.530165   30068 fix.go:216] guest clock: 1718994427.508226463
	I0621 18:27:07.530173   30068 fix.go:229] Guest: 2024-06-21 18:27:07.508226463 +0000 UTC Remote: 2024-06-21 18:27:07.431981059 +0000 UTC m=+25.016949864 (delta=76.245404ms)
	I0621 18:27:07.530199   30068 fix.go:200] guest clock delta is within tolerance: 76.245404ms
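
The `date +%!s(MISSING).%!N(MISSING)` noise is Go's printf complaining about unescaped verbs in what is evidently a `date +%s.%N` command; the fix.go lines then compare that guest timestamp against host time and accept the ~76ms drift. A minimal sketch of that comparison (the 2s tolerance is an assumption, not minikube's exact threshold):

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func main() {
        // Output of `date +%s.%N` on the guest, as seen in the log above.
        guestRaw := "1718994427.508226463"
        secs, err := strconv.ParseFloat(guestRaw, 64)
        if err != nil {
            panic(err)
        }
        // float64 keeps roughly microsecond precision at this epoch, plenty for a drift check.
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        host := time.Now()
        delta := host.Sub(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed threshold for illustration
        fmt.Printf("guest=%s host=%s delta=%s within=%v\n", guest, host, delta, delta <= tolerance)
    }
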
	I0621 18:27:07.530204   30068 start.go:83] releasing machines lock for "ha-406291", held for 25.012726918s
	I0621 18:27:07.530222   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.530466   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:07.532753   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.533110   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.533151   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.533275   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533702   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533877   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533978   30068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:27:07.534028   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.534087   30068 ssh_runner.go:195] Run: cat /version.json
	I0621 18:27:07.534115   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.536489   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536798   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.536828   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536845   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536983   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.537154   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.537312   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.537330   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.537337   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.537509   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.537507   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.537675   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.537830   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.537968   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.610886   30068 ssh_runner.go:195] Run: systemctl --version
	I0621 18:27:07.648150   30068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:27:07.798080   30068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:27:07.803683   30068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:27:07.803731   30068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:27:07.820345   30068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0621 18:27:07.820363   30068 start.go:494] detecting cgroup driver to use...
	I0621 18:27:07.820412   30068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:27:07.835960   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:27:07.849269   30068 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:27:07.849324   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:27:07.861858   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:27:07.874371   30068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:27:07.984965   30068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:27:08.126897   30068 docker.go:233] disabling docker service ...
	I0621 18:27:08.126973   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:27:08.140294   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:27:08.152460   30068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:27:08.289101   30068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:27:08.414578   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 18:27:08.428193   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:27:08.445335   30068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:27:08.445406   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.454715   30068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:27:08.454780   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.464286   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.473688   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.483215   30068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:27:08.492907   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.502386   30068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.518138   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.527822   30068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:27:08.536491   30068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 18:27:08.536537   30068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 18:27:08.548343   30068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 18:27:08.557395   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:27:08.668782   30068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 18:27:08.793146   30068 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:27:08.793228   30068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:27:08.797886   30068 start.go:562] Will wait 60s for crictl version
	I0621 18:27:08.797933   30068 ssh_runner.go:195] Run: which crictl
	I0621 18:27:08.801183   30068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:27:08.838953   30068 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:27:08.839028   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:27:08.865047   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:27:08.892059   30068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:27:08.893365   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:08.895801   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:08.896174   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:08.896198   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:08.896377   30068 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:27:08.900124   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:27:08.912152   30068 kubeadm.go:877] updating cluster {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0621 18:27:08.912252   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:27:08.912299   30068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:27:08.941267   30068 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0621 18:27:08.941328   30068 ssh_runner.go:195] Run: which lz4
	I0621 18:27:08.944757   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0621 18:27:08.944843   30068 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0621 18:27:08.948482   30068 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0621 18:27:08.948507   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0621 18:27:10.186487   30068 crio.go:462] duration metric: took 1.241671996s to copy over tarball
	I0621 18:27:10.186568   30068 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0621 18:27:12.219224   30068 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.032622286s)
	I0621 18:27:12.219256   30068 crio.go:469] duration metric: took 2.032747658s to extract the tarball
	I0621 18:27:12.219265   30068 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0621 18:27:12.255526   30068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:27:12.297692   30068 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 18:27:12.297715   30068 cache_images.go:84] Images are preloaded, skipping loading
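	(Sketch, assuming the paths shown in this log: after the tarball extraction above, the v1.30.2 control-plane images should already be in the CRI-O store, so no image pull is needed; this can be re-checked by hand on the node.)
	  sudo crictl images | grep 'v1.30.2'
	  # should list kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.30.2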
	I0621 18:27:12.297725   30068 kubeadm.go:928] updating node { 192.168.39.198 8443 v1.30.2 crio true true} ...
	I0621 18:27:12.297863   30068 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0621 18:27:12.297956   30068 ssh_runner.go:195] Run: crio config
	I0621 18:27:12.347243   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:27:12.347276   30068 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0621 18:27:12.347288   30068 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0621 18:27:12.347314   30068 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.198 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-406291 NodeName:ha-406291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0621 18:27:12.347487   30068 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-406291"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0621 18:27:12.347514   30068 kube-vip.go:115] generating kube-vip config ...
	I0621 18:27:12.347563   30068 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:27:12.362180   30068 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:27:12.362273   30068 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
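	(Sketch: with the manifest above, once kube-vip wins leader election the HA VIP should be bound to eth0 of this node and the API server should answer on it. Interface and address are taken from the config above; the curl check assumes the default anonymous access to /version.)
	  ip addr show eth0 | grep 192.168.39.254
	  curl -k https://192.168.39.254:8443/version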
	I0621 18:27:12.362316   30068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:27:12.371448   30068 binaries.go:44] Found k8s binaries, skipping transfer
	I0621 18:27:12.371499   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0621 18:27:12.380031   30068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0621 18:27:12.395354   30068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0621 18:27:12.410533   30068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0621 18:27:12.425474   30068 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0621 18:27:12.440059   30068 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0621 18:27:12.443523   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
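	(Sketch: the two "{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp ..." commands above are an idempotent way of pinning these entries in the guest's /etc/hosts; the grep is just a quick way to confirm both lines landed.)
	  grep minikube.internal /etc/hosts
	  # expected:
	  #   192.168.39.1	host.minikube.internal
	  #   192.168.39.254	control-plane.minikube.internal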
	I0621 18:27:12.454828   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:27:12.572486   30068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0621 18:27:12.589057   30068 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.198
	I0621 18:27:12.589078   30068 certs.go:194] generating shared ca certs ...
	I0621 18:27:12.589095   30068 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.589221   30068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:27:12.589272   30068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:27:12.589282   30068 certs.go:256] generating profile certs ...
	I0621 18:27:12.589333   30068 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:27:12.589346   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt with IP's: []
	I0621 18:27:12.759863   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt ...
	I0621 18:27:12.759890   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt: {Name:mk1350197087e6f37ca28e80a43c199beace4f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.760090   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key ...
	I0621 18:27:12.760104   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key: {Name:mk90994b992a268304b337419707e3332d3f039a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.760206   30068 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92
	I0621 18:27:12.760222   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.254]
	I0621 18:27:13.132336   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 ...
	I0621 18:27:13.132362   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92: {Name:mke7daa70ff2d7bf8fa87eea51b1ed6731c0dd6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.132530   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92 ...
	I0621 18:27:13.132546   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92: {Name:mk310235904dba1c4db66ef73b8dcc06ff030051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.132647   30068 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:27:13.132737   30068 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
	I0621 18:27:13.132790   30068 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:27:13.132806   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt with IP's: []
	I0621 18:27:13.317891   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt ...
	I0621 18:27:13.317927   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt: {Name:mk5e450ef3633fa54e81eaeb94f9408c94729912 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.318119   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key ...
	I0621 18:27:13.318132   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key: {Name:mk3a1443924b05c36251566d5313d0eeb467e0fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.318220   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:27:13.318241   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:27:13.318251   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:27:13.318264   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:27:13.318274   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:27:13.318290   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:27:13.318302   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:27:13.318314   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:27:13.318363   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:27:13.318396   30068 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:27:13.318406   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:27:13.318428   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:27:13.318449   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:27:13.318469   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:27:13.318506   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:13.318531   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.318544   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.318556   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.319121   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:27:13.345382   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:27:13.379289   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:27:13.406853   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:27:13.430624   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0621 18:27:13.452498   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0621 18:27:13.474381   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:27:13.497475   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:27:13.520548   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:27:13.543849   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:27:13.569722   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:27:13.594191   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0621 18:27:13.611312   30068 ssh_runner.go:195] Run: openssl version
	I0621 18:27:13.616881   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:27:13.627054   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.631162   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.631214   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.636845   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:27:13.648132   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:27:13.658846   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.663074   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.663140   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.668358   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 18:27:13.678369   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:27:13.688293   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.692517   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.692581   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.697837   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
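	(Sketch: the /etc/ssl/certs/<hash>.0 symlinks created above are named after the OpenSSL subject hash of each CA, so a link can be cross-checked against the PEM it points to; b5213941 is the hash computed for minikubeCA.pem in this run.)
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	  readlink /etc/ssl/certs/b5213941.0                                        # -> /etc/ssl/certs/minikubeCA.pem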
	I0621 18:27:13.707967   30068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:27:13.711761   30068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 18:27:13.711821   30068 kubeadm.go:391] StartCluster: {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:27:13.711887   30068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0621 18:27:13.711960   30068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0621 18:27:13.752929   30068 cri.go:89] found id: ""
	I0621 18:27:13.753017   30068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0621 18:27:13.762514   30068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0621 18:27:13.771612   30068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0621 18:27:13.781740   30068 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0621 18:27:13.781758   30068 kubeadm.go:156] found existing configuration files:
	
	I0621 18:27:13.781811   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0621 18:27:13.790876   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0621 18:27:13.790943   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0621 18:27:13.800011   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0621 18:27:13.809117   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0621 18:27:13.809168   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0621 18:27:13.818279   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0621 18:27:13.827522   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0621 18:27:13.827584   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0621 18:27:13.836671   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0621 18:27:13.845242   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0621 18:27:13.845298   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0621 18:27:13.854365   30068 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0621 18:27:13.951888   30068 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0621 18:27:13.951970   30068 kubeadm.go:309] [preflight] Running pre-flight checks
	I0621 18:27:14.081675   30068 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0621 18:27:14.081845   30068 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0621 18:27:14.081983   30068 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0621 18:27:14.292951   30068 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0621 18:27:14.423174   30068 out.go:204]   - Generating certificates and keys ...
	I0621 18:27:14.423287   30068 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0621 18:27:14.423355   30068 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0621 18:27:14.524306   30068 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0621 18:27:14.693249   30068 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0621 18:27:14.771462   30068 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0621 18:27:14.965492   30068 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0621 18:27:15.095342   30068 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0621 18:27:15.095646   30068 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-406291 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I0621 18:27:15.247328   30068 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0621 18:27:15.247729   30068 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-406291 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I0621 18:27:15.326656   30068 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0621 18:27:15.470979   30068 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0621 18:27:15.620090   30068 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0621 18:27:15.620402   30068 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0621 18:27:15.715693   30068 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0621 18:27:16.259484   30068 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0621 18:27:16.704626   30068 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0621 18:27:16.836633   30068 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0621 18:27:16.996818   30068 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0621 18:27:16.997517   30068 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0621 18:27:16.999949   30068 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0621 18:27:17.001874   30068 out.go:204]   - Booting up control plane ...
	I0621 18:27:17.001982   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0621 18:27:17.002874   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0621 18:27:17.003729   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0621 18:27:17.018894   30068 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0621 18:27:17.019816   30068 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0621 18:27:17.019944   30068 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0621 18:27:17.138099   30068 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0621 18:27:17.138195   30068 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0621 18:27:17.639115   30068 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.282189ms
	I0621 18:27:17.639214   30068 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0621 18:27:23.502026   30068 kubeadm.go:309] [api-check] The API server is healthy after 5.864418149s
	I0621 18:27:23.512938   30068 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0621 18:27:23.528670   30068 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0621 18:27:24.059886   30068 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0621 18:27:24.060060   30068 kubeadm.go:309] [mark-control-plane] Marking the node ha-406291 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0621 18:27:24.071607   30068 kubeadm.go:309] [bootstrap-token] Using token: ha2utu.p9k0bq1xsr5791t7
	I0621 18:27:24.073185   30068 out.go:204]   - Configuring RBAC rules ...
	I0621 18:27:24.073336   30068 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0621 18:27:24.084336   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0621 18:27:24.092265   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0621 18:27:24.096415   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0621 18:27:24.101175   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0621 18:27:24.104689   30068 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0621 18:27:24.121568   30068 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0621 18:27:24.349610   30068 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0621 18:27:24.907607   30068 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0621 18:27:24.908452   30068 kubeadm.go:309] 
	I0621 18:27:24.908529   30068 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0621 18:27:24.908541   30068 kubeadm.go:309] 
	I0621 18:27:24.908607   30068 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0621 18:27:24.908645   30068 kubeadm.go:309] 
	I0621 18:27:24.908698   30068 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0621 18:27:24.908780   30068 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0621 18:27:24.908863   30068 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0621 18:27:24.908873   30068 kubeadm.go:309] 
	I0621 18:27:24.908975   30068 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0621 18:27:24.908993   30068 kubeadm.go:309] 
	I0621 18:27:24.909038   30068 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0621 18:27:24.909045   30068 kubeadm.go:309] 
	I0621 18:27:24.909086   30068 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0621 18:27:24.909160   30068 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0621 18:27:24.909256   30068 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0621 18:27:24.909274   30068 kubeadm.go:309] 
	I0621 18:27:24.909401   30068 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0621 18:27:24.909522   30068 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0621 18:27:24.909544   30068 kubeadm.go:309] 
	I0621 18:27:24.909671   30068 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ha2utu.p9k0bq1xsr5791t7 \
	I0621 18:27:24.909771   30068 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df \
	I0621 18:27:24.909810   30068 kubeadm.go:309] 	--control-plane 
	I0621 18:27:24.909824   30068 kubeadm.go:309] 
	I0621 18:27:24.909898   30068 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0621 18:27:24.909904   30068 kubeadm.go:309] 
	I0621 18:27:24.909977   30068 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ha2utu.p9k0bq1xsr5791t7 \
	I0621 18:27:24.910064   30068 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df 
	I0621 18:27:24.910664   30068 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
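	(Sketch: the --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA public key; assuming an RSA CA key and the cert dir used in this run, it can be recomputed with the standard kubeadm-documented pipeline.)
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'
	  # should print 25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df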
	I0621 18:27:24.910700   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:27:24.910708   30068 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0621 18:27:24.912398   30068 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0621 18:27:24.913676   30068 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0621 18:27:24.919660   30068 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0621 18:27:24.919677   30068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0621 18:27:24.938734   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0621 18:27:25.303975   30068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0621 18:27:25.304070   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:25.304073   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-406291 minikube.k8s.io/updated_at=2024_06_21T18_27_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600 minikube.k8s.io/name=ha-406291 minikube.k8s.io/primary=true
	I0621 18:27:25.334777   30068 ops.go:34] apiserver oom_adj: -16
	I0621 18:27:25.436873   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:25.937461   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:26.436991   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:26.937206   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:27.437152   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:27.937860   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:28.437177   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:28.937036   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:29.437007   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:29.937140   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:30.437060   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:30.937199   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:31.437695   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:31.937675   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:32.437034   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:32.937808   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:33.437793   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:33.937401   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:34.437307   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:34.937172   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:35.437428   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:35.937146   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:36.436951   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:36.937873   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:37.039583   30068 kubeadm.go:1107] duration metric: took 11.735587948s to wait for elevateKubeSystemPrivileges
	W0621 18:27:37.039626   30068 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0621 18:27:37.039635   30068 kubeadm.go:393] duration metric: took 23.327819322s to StartCluster
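	(Sketch: the block of repeated "kubectl get sa default" calls above is a poll, roughly every 0.5s, waiting for the "default" ServiceAccount to exist; the 11.7s duration metric is that loop finishing. An equivalent shell form, using the same binary and kubeconfig paths as the log:)
	  until sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done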
	I0621 18:27:37.039654   30068 settings.go:142] acquiring lock: {Name:mkdbb660cad4d8fb446e5c2ca4439ea3326e9592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:37.039737   30068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:27:37.040362   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/kubeconfig: {Name:mk87038194ab41f67dd50d90b017d32a83c3da4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:37.040584   30068 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:27:37.040604   30068 start.go:240] waiting for startup goroutines ...
	I0621 18:27:37.040603   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0621 18:27:37.040612   30068 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0621 18:27:37.040669   30068 addons.go:69] Setting storage-provisioner=true in profile "ha-406291"
	I0621 18:27:37.040677   30068 addons.go:69] Setting default-storageclass=true in profile "ha-406291"
	I0621 18:27:37.040699   30068 addons.go:234] Setting addon storage-provisioner=true in "ha-406291"
	I0621 18:27:37.040730   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:27:37.040700   30068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-406291"
	I0621 18:27:37.040772   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:37.041052   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.041075   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.041146   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.041174   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.055583   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42699
	I0621 18:27:37.056062   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.056549   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.056570   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.056894   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.057371   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.057399   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.061343   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44857
	I0621 18:27:37.061846   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.062393   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.062418   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.062721   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.062885   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.065021   30068 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:27:37.065351   30068 kapi.go:59] client config for ha-406291: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt", KeyFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key", CAFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf98a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0621 18:27:37.065825   30068 cert_rotation.go:137] Starting client certificate rotation controller
	I0621 18:27:37.066065   30068 addons.go:234] Setting addon default-storageclass=true in "ha-406291"
	I0621 18:27:37.066106   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:27:37.066471   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.066512   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.072759   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39433
	I0621 18:27:37.073274   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.073791   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.073819   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.074169   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.074346   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.076096   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:37.078312   30068 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0621 18:27:37.079815   30068 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 18:27:37.079840   30068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0621 18:27:37.079864   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:37.081896   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0621 18:27:37.082293   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.082859   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.082878   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.083163   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.083202   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.083607   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.083648   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.083733   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:37.083752   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.083817   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:37.083990   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:37.084135   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:37.084288   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:37.103512   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42081
	I0621 18:27:37.103937   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.104456   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.104473   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.104853   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.105052   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.106976   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:37.107211   30068 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0621 18:27:37.107231   30068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0621 18:27:37.107252   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:37.110295   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.110729   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:37.110755   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.110870   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:37.111030   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:37.111197   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:37.111314   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:37.137868   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0621 18:27:37.228739   30068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 18:27:37.290397   30068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0621 18:27:37.684619   30068 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
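
The command logged at 18:27:37.137868 splices a hosts block (192.168.39.1 -> host.minikube.internal) into the coredns ConfigMap with sed and kubectl replace, which is what the "host record injected" line above confirms. Purely as an illustration of the same edit, and not the way minikube performs it, a client-go sketch might look like the following; the kubeconfig path is the one used on the guest, and everything else is assumed for the example.

// coredns_hosts_sketch.go - hypothetical client-go equivalent of the sed/kubectl
// pipeline shown in the log; illustrative only, not minikube's implementation.
package main

import (
	"context"
	"log"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used on the guest in the log; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Insert the hosts block just before the forward plugin, mirroring the
	// sed expression in the log above.
	hosts := "        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }\n"
	corefile := cm.Data["Corefile"]
	cm.Data["Corefile"] = strings.Replace(corefile, "        forward . /etc/resolv.conf", hosts+"        forward . /etc/resolv.conf", 1)

	if _, err := client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("host record injected into CoreDNS's ConfigMap")
}
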
	I0621 18:27:37.902862   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.902882   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.902957   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.902988   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903179   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903194   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903203   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.903210   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903287   30068 main.go:141] libmachine: (ha-406291) DBG | Closing plugin on server side
	I0621 18:27:37.903300   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903312   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903321   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.903328   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903474   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903485   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903513   30068 main.go:141] libmachine: (ha-406291) DBG | Closing plugin on server side
	I0621 18:27:37.903578   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903595   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903740   30068 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0621 18:27:37.903767   30068 round_trippers.go:469] Request Headers:
	I0621 18:27:37.903778   30068 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:27:37.903784   30068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:27:37.922164   30068 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0621 18:27:37.922691   30068 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0621 18:27:37.922706   30068 round_trippers.go:469] Request Headers:
	I0621 18:27:37.922713   30068 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:27:37.922718   30068 round_trippers.go:473]     Content-Type: application/json
	I0621 18:27:37.922720   30068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:27:37.926249   30068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0621 18:27:37.926491   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.926512   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.926731   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.926748   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.928515   30068 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0621 18:27:37.930095   30068 addons.go:510] duration metric: took 889.47949ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0621 18:27:37.930127   30068 start.go:245] waiting for cluster config update ...
	I0621 18:27:37.930137   30068 start.go:254] writing updated cluster config ...
	I0621 18:27:37.931687   30068 out.go:177] 
	I0621 18:27:37.933039   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:37.933102   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:37.934716   30068 out.go:177] * Starting "ha-406291-m02" control-plane node in "ha-406291" cluster
	I0621 18:27:37.935953   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:27:37.935970   30068 cache.go:56] Caching tarball of preloaded images
	I0621 18:27:37.936052   30068 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:27:37.936063   30068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:27:37.936142   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:37.936325   30068 start.go:360] acquireMachinesLock for ha-406291-m02: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:27:37.936370   30068 start.go:364] duration metric: took 24.972µs to acquireMachinesLock for "ha-406291-m02"
	I0621 18:27:37.936392   30068 start.go:93] Provisioning new machine with config: &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:27:37.936481   30068 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0621 18:27:37.938349   30068 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0621 18:27:37.938428   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.938450   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.952767   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34515
	I0621 18:27:37.953176   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.953649   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.953669   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.953963   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.954162   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:37.954301   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:37.954431   30068 start.go:159] libmachine.API.Create for "ha-406291" (driver="kvm2")
	I0621 18:27:37.954456   30068 client.go:168] LocalClient.Create starting
	I0621 18:27:37.954488   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem
	I0621 18:27:37.954518   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:27:37.954538   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:27:37.954589   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem
	I0621 18:27:37.954607   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:27:37.954621   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:27:37.954636   30068 main.go:141] libmachine: Running pre-create checks...
	I0621 18:27:37.954644   30068 main.go:141] libmachine: (ha-406291-m02) Calling .PreCreateCheck
	I0621 18:27:37.954836   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:37.955238   30068 main.go:141] libmachine: Creating machine...
	I0621 18:27:37.955253   30068 main.go:141] libmachine: (ha-406291-m02) Calling .Create
	I0621 18:27:37.955404   30068 main.go:141] libmachine: (ha-406291-m02) Creating KVM machine...
	I0621 18:27:37.956748   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found existing default KVM network
	I0621 18:27:37.956951   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found existing private KVM network mk-ha-406291
	I0621 18:27:37.957069   30068 main.go:141] libmachine: (ha-406291-m02) Setting up store path in /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 ...
	I0621 18:27:37.957091   30068 main.go:141] libmachine: (ha-406291-m02) Building disk image from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 18:27:37.957139   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:37.957062   30460 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:27:37.957278   30068 main.go:141] libmachine: (ha-406291-m02) Downloading /home/jenkins/minikube-integration/19112-8111/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0621 18:27:38.178433   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.178291   30460 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa...
	I0621 18:27:38.322659   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.322470   30460 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/ha-406291-m02.rawdisk...
	I0621 18:27:38.322709   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 (perms=drwx------)
	I0621 18:27:38.322719   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Writing magic tar header
	I0621 18:27:38.322734   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Writing SSH key tar header
	I0621 18:27:38.322745   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.322583   30460 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 ...
	I0621 18:27:38.322758   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02
	I0621 18:27:38.322822   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines
	I0621 18:27:38.322839   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines (perms=drwxr-xr-x)
	I0621 18:27:38.322855   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube (perms=drwxr-xr-x)
	I0621 18:27:38.322864   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111 (perms=drwxrwxr-x)
	I0621 18:27:38.322874   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0621 18:27:38.322882   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0621 18:27:38.322896   30068 main.go:141] libmachine: (ha-406291-m02) Creating domain...
	I0621 18:27:38.322919   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:27:38.322939   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111
	I0621 18:27:38.322950   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0621 18:27:38.322968   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins
	I0621 18:27:38.322980   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home
	I0621 18:27:38.322988   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Skipping /home - not owner
	I0621 18:27:38.324031   30068 main.go:141] libmachine: (ha-406291-m02) define libvirt domain using xml: 
	I0621 18:27:38.324058   30068 main.go:141] libmachine: (ha-406291-m02) <domain type='kvm'>
	I0621 18:27:38.324071   30068 main.go:141] libmachine: (ha-406291-m02)   <name>ha-406291-m02</name>
	I0621 18:27:38.324078   30068 main.go:141] libmachine: (ha-406291-m02)   <memory unit='MiB'>2200</memory>
	I0621 18:27:38.324087   30068 main.go:141] libmachine: (ha-406291-m02)   <vcpu>2</vcpu>
	I0621 18:27:38.324098   30068 main.go:141] libmachine: (ha-406291-m02)   <features>
	I0621 18:27:38.324107   30068 main.go:141] libmachine: (ha-406291-m02)     <acpi/>
	I0621 18:27:38.324116   30068 main.go:141] libmachine: (ha-406291-m02)     <apic/>
	I0621 18:27:38.324125   30068 main.go:141] libmachine: (ha-406291-m02)     <pae/>
	I0621 18:27:38.324134   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324149   30068 main.go:141] libmachine: (ha-406291-m02)   </features>
	I0621 18:27:38.324164   30068 main.go:141] libmachine: (ha-406291-m02)   <cpu mode='host-passthrough'>
	I0621 18:27:38.324173   30068 main.go:141] libmachine: (ha-406291-m02)   
	I0621 18:27:38.324184   30068 main.go:141] libmachine: (ha-406291-m02)   </cpu>
	I0621 18:27:38.324199   30068 main.go:141] libmachine: (ha-406291-m02)   <os>
	I0621 18:27:38.324209   30068 main.go:141] libmachine: (ha-406291-m02)     <type>hvm</type>
	I0621 18:27:38.324220   30068 main.go:141] libmachine: (ha-406291-m02)     <boot dev='cdrom'/>
	I0621 18:27:38.324231   30068 main.go:141] libmachine: (ha-406291-m02)     <boot dev='hd'/>
	I0621 18:27:38.324258   30068 main.go:141] libmachine: (ha-406291-m02)     <bootmenu enable='no'/>
	I0621 18:27:38.324280   30068 main.go:141] libmachine: (ha-406291-m02)   </os>
	I0621 18:27:38.324293   30068 main.go:141] libmachine: (ha-406291-m02)   <devices>
	I0621 18:27:38.324310   30068 main.go:141] libmachine: (ha-406291-m02)     <disk type='file' device='cdrom'>
	I0621 18:27:38.324333   30068 main.go:141] libmachine: (ha-406291-m02)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/boot2docker.iso'/>
	I0621 18:27:38.324344   30068 main.go:141] libmachine: (ha-406291-m02)       <target dev='hdc' bus='scsi'/>
	I0621 18:27:38.324350   30068 main.go:141] libmachine: (ha-406291-m02)       <readonly/>
	I0621 18:27:38.324357   30068 main.go:141] libmachine: (ha-406291-m02)     </disk>
	I0621 18:27:38.324363   30068 main.go:141] libmachine: (ha-406291-m02)     <disk type='file' device='disk'>
	I0621 18:27:38.324375   30068 main.go:141] libmachine: (ha-406291-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0621 18:27:38.324390   30068 main.go:141] libmachine: (ha-406291-m02)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/ha-406291-m02.rawdisk'/>
	I0621 18:27:38.324401   30068 main.go:141] libmachine: (ha-406291-m02)       <target dev='hda' bus='virtio'/>
	I0621 18:27:38.324412   30068 main.go:141] libmachine: (ha-406291-m02)     </disk>
	I0621 18:27:38.324421   30068 main.go:141] libmachine: (ha-406291-m02)     <interface type='network'>
	I0621 18:27:38.324431   30068 main.go:141] libmachine: (ha-406291-m02)       <source network='mk-ha-406291'/>
	I0621 18:27:38.324442   30068 main.go:141] libmachine: (ha-406291-m02)       <model type='virtio'/>
	I0621 18:27:38.324453   30068 main.go:141] libmachine: (ha-406291-m02)     </interface>
	I0621 18:27:38.324465   30068 main.go:141] libmachine: (ha-406291-m02)     <interface type='network'>
	I0621 18:27:38.324474   30068 main.go:141] libmachine: (ha-406291-m02)       <source network='default'/>
	I0621 18:27:38.324481   30068 main.go:141] libmachine: (ha-406291-m02)       <model type='virtio'/>
	I0621 18:27:38.324493   30068 main.go:141] libmachine: (ha-406291-m02)     </interface>
	I0621 18:27:38.324503   30068 main.go:141] libmachine: (ha-406291-m02)     <serial type='pty'>
	I0621 18:27:38.324516   30068 main.go:141] libmachine: (ha-406291-m02)       <target port='0'/>
	I0621 18:27:38.324527   30068 main.go:141] libmachine: (ha-406291-m02)     </serial>
	I0621 18:27:38.324540   30068 main.go:141] libmachine: (ha-406291-m02)     <console type='pty'>
	I0621 18:27:38.324553   30068 main.go:141] libmachine: (ha-406291-m02)       <target type='serial' port='0'/>
	I0621 18:27:38.324562   30068 main.go:141] libmachine: (ha-406291-m02)     </console>
	I0621 18:27:38.324572   30068 main.go:141] libmachine: (ha-406291-m02)     <rng model='virtio'>
	I0621 18:27:38.324596   30068 main.go:141] libmachine: (ha-406291-m02)       <backend model='random'>/dev/random</backend>
	I0621 18:27:38.324609   30068 main.go:141] libmachine: (ha-406291-m02)     </rng>
	I0621 18:27:38.324630   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324640   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324648   30068 main.go:141] libmachine: (ha-406291-m02)   </devices>
	I0621 18:27:38.324660   30068 main.go:141] libmachine: (ha-406291-m02) </domain>
	I0621 18:27:38.324670   30068 main.go:141] libmachine: (ha-406291-m02) 
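
The XML printed above is the libvirt domain definition the kvm2 driver generates for ha-406291-m02: the boot2docker ISO attached as a CD-ROM, the raw disk image, and NICs on the mk-ha-406291 and default networks. As a rough sketch only, and not the driver's actual code, defining and booting a domain from such XML with the github.com/libvirt/libvirt-go bindings could look like this; the file name is a placeholder.

// domain_define_sketch.go - illustrative only; not the kvm2 driver's implementation.
package main

import (
	"fmt"
	"log"
	"os"

	libvirt "github.com/libvirt/libvirt-go"
)

func main() {
	// Read a domain definition like the one printed in the log above
	// (placeholder path for the example).
	xml, err := os.ReadFile("ha-406291-m02.xml")
	if err != nil {
		log.Fatalf("read domain xml: %v", err)
	}

	// qemu:///system matches the KVMQemuURI shown in the cluster config.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect to libvirt: %v", err)
	}
	defer conn.Close()

	// Define the persistent domain, then start it (roughly the equivalent of
	// `virsh define` followed by `virsh start`).
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start domain: %v", err)
	}
	fmt.Println("domain defined and started")
}
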
	I0621 18:27:38.332042   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:20:08:0e in network default
	I0621 18:27:38.332641   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring networks are active...
	I0621 18:27:38.332676   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:38.333428   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring network default is active
	I0621 18:27:38.333804   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring network mk-ha-406291 is active
	I0621 18:27:38.334296   30068 main.go:141] libmachine: (ha-406291-m02) Getting domain xml...
	I0621 18:27:38.335120   30068 main.go:141] libmachine: (ha-406291-m02) Creating domain...
	I0621 18:27:39.549305   30068 main.go:141] libmachine: (ha-406291-m02) Waiting to get IP...
	I0621 18:27:39.550967   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:39.551951   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:39.551976   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:39.551936   30460 retry.go:31] will retry after 267.635955ms: waiting for machine to come up
	I0621 18:27:39.821522   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:39.821997   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:39.822029   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:39.821946   30460 retry.go:31] will retry after 374.873977ms: waiting for machine to come up
	I0621 18:27:40.198386   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:40.198873   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:40.198904   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:40.198809   30460 retry.go:31] will retry after 315.815993ms: waiting for machine to come up
	I0621 18:27:40.516366   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:40.516862   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:40.516886   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:40.516817   30460 retry.go:31] will retry after 541.866776ms: waiting for machine to come up
	I0621 18:27:41.060525   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:41.061206   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:41.061240   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:41.061128   30460 retry.go:31] will retry after 493.062164ms: waiting for machine to come up
	I0621 18:27:41.555747   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:41.556109   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:41.556139   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:41.556061   30460 retry.go:31] will retry after 805.68132ms: waiting for machine to come up
	I0621 18:27:42.362929   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:42.363432   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:42.363464   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:42.363390   30460 retry.go:31] will retry after 986.445399ms: waiting for machine to come up
	I0621 18:27:43.351818   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:43.352265   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:43.352293   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:43.352201   30460 retry.go:31] will retry after 1.001415085s: waiting for machine to come up
	I0621 18:27:44.355253   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:44.355689   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:44.355710   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:44.355671   30460 retry.go:31] will retry after 1.270979624s: waiting for machine to come up
	I0621 18:27:45.627787   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:45.628323   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:45.628354   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:45.628272   30460 retry.go:31] will retry after 2.328221347s: waiting for machine to come up
	I0621 18:27:47.958352   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:47.958918   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:47.958945   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:47.958858   30460 retry.go:31] will retry after 2.603205559s: waiting for machine to come up
	I0621 18:27:50.565502   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:50.565956   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:50.565982   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:50.565839   30460 retry.go:31] will retry after 3.267607258s: waiting for machine to come up
	I0621 18:27:53.834801   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:53.835311   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:53.835344   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:53.835270   30460 retry.go:31] will retry after 4.450176964s: waiting for machine to come up
	I0621 18:27:58.286744   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.287205   30068 main.go:141] libmachine: (ha-406291-m02) Found IP for machine: 192.168.39.89
	I0621 18:27:58.287228   30068 main.go:141] libmachine: (ha-406291-m02) Reserving static IP address...
	I0621 18:27:58.287241   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has current primary IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.287601   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find host DHCP lease matching {name: "ha-406291-m02", mac: "52:54:00:a6:9a:09", ip: "192.168.39.89"} in network mk-ha-406291
	I0621 18:27:58.359643   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Getting to WaitForSSH function...
	I0621 18:27:58.359672   30068 main.go:141] libmachine: (ha-406291-m02) Reserved static IP address: 192.168.39.89
	I0621 18:27:58.359686   30068 main.go:141] libmachine: (ha-406291-m02) Waiting for SSH to be available...
	I0621 18:27:58.362234   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.362656   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.362687   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.362831   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH client type: external
	I0621 18:27:58.362856   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa (-rw-------)
	I0621 18:27:58.362889   30068 main.go:141] libmachine: (ha-406291-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:58.362901   30068 main.go:141] libmachine: (ha-406291-m02) DBG | About to run SSH command:
	I0621 18:27:58.362914   30068 main.go:141] libmachine: (ha-406291-m02) DBG | exit 0
	I0621 18:27:58.489760   30068 main.go:141] libmachine: (ha-406291-m02) DBG | SSH cmd err, output: <nil>: 
	I0621 18:27:58.490247   30068 main.go:141] libmachine: (ha-406291-m02) KVM machine creation complete!
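
The sequence of "will retry after ..." messages above is the driver polling the libvirt DHCP leases with a growing delay until the new guest obtains an IP, followed by a WaitForSSH probe that simply runs exit 0 over SSH. A stdlib-only Go sketch of that wait-with-backoff pattern is shown below; lookupIP is a hypothetical stand-in for the DHCP-lease query and this is not minikube's retry implementation.

// wait_for_ip_sketch.go - illustrative backoff loop, not minikube's retry.go.
package main

import (
	"errors"
	"fmt"
	"log"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases
// for a given MAC address; it returns an error until a lease appears.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP retries lookupIP with jittered, doubling delays until it succeeds
// or the overall timeout elapses, mirroring the intervals seen in the log.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		log.Printf("will retry after %s: waiting for machine to come up", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:a6:9a:09", 30*time.Second); err != nil {
		log.Fatal(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}
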
	I0621 18:27:58.490512   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:58.491093   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:58.491338   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:58.491506   30068 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0621 18:27:58.491523   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:27:58.492807   30068 main.go:141] libmachine: Detecting operating system of created instance...
	I0621 18:27:58.492820   30068 main.go:141] libmachine: Waiting for SSH to be available...
	I0621 18:27:58.492825   30068 main.go:141] libmachine: Getting to WaitForSSH function...
	I0621 18:27:58.492853   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.495422   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.495802   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.495822   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.496013   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.496199   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.496377   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.496515   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.496690   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.496943   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.496957   30068 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0621 18:27:58.609072   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:58.609094   30068 main.go:141] libmachine: Detecting the provisioner...
	I0621 18:27:58.609101   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.611976   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.612412   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.612450   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.612655   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.612869   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.613083   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.613234   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.613421   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.613617   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.613629   30068 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0621 18:27:58.726636   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0621 18:27:58.726736   30068 main.go:141] libmachine: found compatible host: buildroot
	I0621 18:27:58.726751   30068 main.go:141] libmachine: Provisioning with buildroot...
	I0621 18:27:58.726768   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.727017   30068 buildroot.go:166] provisioning hostname "ha-406291-m02"
	I0621 18:27:58.727040   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.727234   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.729851   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.730255   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.730296   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.730453   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.730628   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.730787   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.730932   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.731090   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.731271   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.731295   30068 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291-m02 && echo "ha-406291-m02" | sudo tee /etc/hostname
	I0621 18:27:58.855682   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291-m02
	
	I0621 18:27:58.855710   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.858373   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.858679   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.858702   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.858921   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.859107   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.859289   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.859473   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.859613   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.859768   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.859784   30068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:27:58.979692   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:58.979722   30068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:27:58.979735   30068 buildroot.go:174] setting up certificates
	I0621 18:27:58.979743   30068 provision.go:84] configureAuth start
	I0621 18:27:58.979750   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.980076   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:58.982730   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.983078   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.983110   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.983252   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.985344   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.985701   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.985721   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.985890   30068 provision.go:143] copyHostCerts
	I0621 18:27:58.985924   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:58.985962   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:27:58.985976   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:58.986057   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:27:58.986156   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:58.986180   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:27:58.986187   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:58.986229   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:27:58.986293   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:58.986317   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:27:58.986326   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:58.986360   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:27:58.986426   30068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291-m02 san=[127.0.0.1 192.168.39.89 ha-406291-m02 localhost minikube]
	I0621 18:27:59.066564   30068 provision.go:177] copyRemoteCerts
	I0621 18:27:59.066626   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:27:59.066653   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.069578   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.069924   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.069947   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.070132   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.070298   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.070432   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.070553   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.157218   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:27:59.157315   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 18:27:59.181198   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:27:59.181277   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:27:59.204590   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:27:59.204671   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0621 18:27:59.228836   30068 provision.go:87] duration metric: took 249.081961ms to configureAuth
	I0621 18:27:59.228857   30068 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:27:59.229023   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:59.229086   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.231759   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.232083   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.232114   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.232338   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.232525   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.232684   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.232859   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.233030   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:59.233222   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:59.233247   30068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:27:59.513149   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 18:27:59.513176   30068 main.go:141] libmachine: Checking connection to Docker...
	I0621 18:27:59.513184   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetURL
	I0621 18:27:59.514352   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using libvirt version 6000000
	I0621 18:27:59.516825   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.517208   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.517232   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.517421   30068 main.go:141] libmachine: Docker is up and running!
	I0621 18:27:59.517438   30068 main.go:141] libmachine: Reticulating splines...
	I0621 18:27:59.517446   30068 client.go:171] duration metric: took 21.562982419s to LocalClient.Create
	I0621 18:27:59.517469   30068 start.go:167] duration metric: took 21.563040702s to libmachine.API.Create "ha-406291"
	I0621 18:27:59.517482   30068 start.go:293] postStartSetup for "ha-406291-m02" (driver="kvm2")
	I0621 18:27:59.517494   30068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:27:59.517516   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.517768   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:27:59.517792   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.520113   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.520510   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.520540   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.520681   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.520881   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.521084   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.521256   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.607755   30068 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:27:59.611555   30068 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:27:59.611581   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:27:59.611696   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:27:59.611804   30068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:27:59.611817   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:27:59.611939   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:27:59.620359   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:59.643420   30068 start.go:296] duration metric: took 125.923821ms for postStartSetup
	I0621 18:27:59.643465   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:59.644062   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:59.646345   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.646685   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.646713   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.646924   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:59.647158   30068 start.go:128] duration metric: took 21.710666055s to createHost
	I0621 18:27:59.647181   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.649469   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.649766   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.649808   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.649962   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.650164   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.650334   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.650463   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.650585   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:59.650778   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:59.650790   30068 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0621 18:27:59.762223   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718994479.737744516
	
	I0621 18:27:59.762248   30068 fix.go:216] guest clock: 1718994479.737744516
	I0621 18:27:59.762259   30068 fix.go:229] Guest: 2024-06-21 18:27:59.737744516 +0000 UTC Remote: 2024-06-21 18:27:59.647170431 +0000 UTC m=+77.232139235 (delta=90.574085ms)
	I0621 18:27:59.762274   30068 fix.go:200] guest clock delta is within tolerance: 90.574085ms
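	The guest-clock check logged above can be reproduced by hand. A rough sketch only (not minikube's own code), using the guest IP, SSH key and user shown earlier in this log:
	  # Read the guest clock over SSH and compare it with the host clock, as fix.go does above.
	  guest=$(ssh -o StrictHostKeyChecking=no -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa docker@192.168.39.89 'date +%s.%N')
	  host=$(date +%s.%N)
	  # The delta should stay within minikube's tolerance; the log above reports ~90ms.
	  awk -v h="$host" -v g="$guest" 'BEGIN{printf "clock delta: %.6f s\n", h - g}'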
	I0621 18:27:59.762279   30068 start.go:83] releasing machines lock for "ha-406291-m02", held for 21.825898335s
	I0621 18:27:59.762294   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.762550   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:59.765379   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.765744   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.765772   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.768017   30068 out.go:177] * Found network options:
	I0621 18:27:59.769201   30068 out.go:177]   - NO_PROXY=192.168.39.198
	W0621 18:27:59.770311   30068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0621 18:27:59.770350   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.770853   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.771049   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.771143   30068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:27:59.771180   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	W0621 18:27:59.771247   30068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0621 18:27:59.771305   30068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:27:59.771322   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.774073   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774210   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774455   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.774482   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774586   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.774595   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.774615   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774740   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.774796   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.774875   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.774963   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.775030   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.775150   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.775184   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:28:00.009851   30068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:28:00.016373   30068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:28:00.016450   30068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:28:00.032199   30068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0621 18:28:00.032221   30068 start.go:494] detecting cgroup driver to use...
	I0621 18:28:00.032283   30068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:28:00.047343   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:28:00.061720   30068 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:28:00.061774   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:28:00.074668   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:28:00.087919   30068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:28:00.213060   30068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:28:00.376339   30068 docker.go:233] disabling docker service ...
	I0621 18:28:00.376406   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:28:00.391732   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:28:00.405305   30068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:28:00.525867   30068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:28:00.642362   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 18:28:00.656276   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:28:00.673811   30068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:28:00.673883   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.683794   30068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:28:00.683849   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.693601   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.703298   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.712924   30068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:28:00.722921   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.733272   30068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.749781   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
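	The sed commands above rewrite the CRI-O drop-in config (pause image, cgroup manager, conmon cgroup, default sysctls). To see their net effect one could dump the touched keys on the guest; a small sketch, not part of the test run:
	  # Inspect the CRI-O drop-in that the commands above modified.
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf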
	I0621 18:28:00.759708   30068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:28:00.768749   30068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 18:28:00.768811   30068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 18:28:00.780758   30068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 18:28:00.789993   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:28:00.904855   30068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 18:28:01.039631   30068 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:28:01.039706   30068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:28:01.044480   30068 start.go:562] Will wait 60s for crictl version
	I0621 18:28:01.044536   30068 ssh_runner.go:195] Run: which crictl
	I0621 18:28:01.048220   30068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:28:01.089333   30068 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:28:01.089402   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:28:01.115665   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:28:01.144585   30068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:28:01.145952   30068 out.go:177]   - env NO_PROXY=192.168.39.198
	I0621 18:28:01.147149   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:28:01.149745   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:28:01.150121   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:28:01.150153   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:28:01.150424   30068 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:28:01.154395   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:28:01.167802   30068 mustload.go:65] Loading cluster: ha-406291
	I0621 18:28:01.168024   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:28:01.168528   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:28:01.168581   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:28:01.183458   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35465
	I0621 18:28:01.183955   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:28:01.184452   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:28:01.184472   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:28:01.184809   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:28:01.185006   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:28:01.186504   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:28:01.186796   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:28:01.186838   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:28:01.201898   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38995
	I0621 18:28:01.202307   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:28:01.202715   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:28:01.202735   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:28:01.203060   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:28:01.203242   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:28:01.203402   30068 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.89
	I0621 18:28:01.203414   30068 certs.go:194] generating shared ca certs ...
	I0621 18:28:01.203427   30068 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.203536   30068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:28:01.203569   30068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:28:01.203578   30068 certs.go:256] generating profile certs ...
	I0621 18:28:01.203637   30068 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:28:01.203663   30068 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63
	I0621 18:28:01.203682   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.89 192.168.39.254]
	I0621 18:28:01.277240   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 ...
	I0621 18:28:01.277269   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63: {Name:mk0eb1e86875fe5e87f845f9e621f66001c859bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.277433   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63 ...
	I0621 18:28:01.277446   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63: {Name:mk95e28e76a927e44fae3dabafa76ecc474c70ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.277517   30068 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:28:01.277686   30068 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
	I0621 18:28:01.277852   30068 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:28:01.277870   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:28:01.277883   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:28:01.277894   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:28:01.277906   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:28:01.277922   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:28:01.277934   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:28:01.277946   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:28:01.277957   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:28:01.278003   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:28:01.278030   30068 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:28:01.278039   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:28:01.278059   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:28:01.278080   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:28:01.278100   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:28:01.278136   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:28:01.278162   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.278179   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.278191   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.278220   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:28:01.281289   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:28:01.281749   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:28:01.281771   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:28:01.281960   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:28:01.282180   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:28:01.282351   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:28:01.282534   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:28:01.350153   30068 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0621 18:28:01.355146   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0621 18:28:01.366317   30068 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0621 18:28:01.370418   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0621 18:28:01.381527   30068 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0621 18:28:01.385371   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0621 18:28:01.395583   30068 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0621 18:28:01.399523   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0621 18:28:01.409427   30068 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0621 18:28:01.413340   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0621 18:28:01.424281   30068 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0621 18:28:01.428574   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0621 18:28:01.443501   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:28:01.467141   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:28:01.489464   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:28:01.512839   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:28:01.536345   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0621 18:28:01.560903   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0621 18:28:01.585228   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:28:01.609236   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:28:01.632797   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:28:01.657717   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:28:01.680728   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:28:01.704813   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0621 18:28:01.722206   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0621 18:28:01.739548   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0621 18:28:01.757066   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0621 18:28:01.773769   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0621 18:28:01.790648   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0621 18:28:01.807019   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0621 18:28:01.824606   30068 ssh_runner.go:195] Run: openssl version
	I0621 18:28:01.830760   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:28:01.841994   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.846701   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.846753   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.852556   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0621 18:28:01.863407   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:28:01.874586   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.879134   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.879185   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.884636   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:28:01.895639   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:28:01.907107   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.911747   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.911813   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.917537   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 18:28:01.928452   30068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:28:01.932569   30068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 18:28:01.932640   30068 kubeadm.go:928] updating node {m02 192.168.39.89 8443 v1.30.2 crio true true} ...
	I0621 18:28:01.932831   30068 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0621 18:28:01.932869   30068 kube-vip.go:115] generating kube-vip config ...
	I0621 18:28:01.932919   30068 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:28:01.949970   30068 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:28:01.950046   30068 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0621 18:28:01.950102   30068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:28:01.960116   30068 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0621 18:28:01.960197   30068 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0621 18:28:01.969893   30068 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0621 18:28:01.969926   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0621 18:28:01.969997   30068 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0621 18:28:01.970033   30068 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0621 18:28:01.970001   30068 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0621 18:28:01.974344   30068 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0621 18:28:01.974375   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0621 18:28:02.755689   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0621 18:28:02.755764   30068 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0621 18:28:02.760415   30068 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0621 18:28:02.760448   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0621 18:28:55.051081   30068 out.go:177] 
	W0621 18:28:55.052955   30068 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0] Decompressors:map[bz2:0xc000769610 gz:0xc000769618 tar:0xc0007695c0 tar.bz2:0xc0007695d0 tar.gz:0xc0007695e0 tar.xz:0xc0007695f0 tar.zst:0xc000769600 tbz2:0xc0007695d0 tgz:0xc0007695e0 txz:0xc0007695f0 tzst:0xc000769600 xz:0xc000769620 zip:0xc000769630 zst:0xc000769628] Getters:map[file:0xc0009371c0 http:0xc
0008bcf50 https:0xc0008bcfa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:46716->151.101.193.55:443: read: connection reset by peer
	W0621 18:28:55.052979   30068 out.go:239] * 
	W0621 18:28:55.053829   30068 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0621 18:28:55.055312   30068 out.go:177] 
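	The GUEST_START failure above is a network error (connection reset by peer) while fetching the kubelet binary. As a hedged sketch, the same artifact can be re-fetched outside the test harness and checked against its published sha256:
	  # Hypothetical manual retry of the download that failed above; not part of the minikube run.
	  curl -fLo kubelet https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet
	  curl -fLo kubelet.sha256 https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
	  echo "$(cat kubelet.sha256)  kubelet" | sha256sum -c -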
	
	
	==> CRI-O <==
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.743860579Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995227743748936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8af28fe0-b229-48dd-b79c-48a0f0d9ebdc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.744432360Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f04a5ea-0598-4577-8b97-c31948fd9f13 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.744495749Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f04a5ea-0598-4577-8b97-c31948fd9f13 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.744765521Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f04a5ea-0598-4577-8b97-c31948fd9f13 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.785994365Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c5330818-1905-409f-b9bb-a7f733555432 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.786082068Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c5330818-1905-409f-b9bb-a7f733555432 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.787301681Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e8603ca5-1e45-4742-b5e8-d4d0ef834c11 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.787721604Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995227787698595,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e8603ca5-1e45-4742-b5e8-d4d0ef834c11 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.788304567Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82f6e793-7f70-4e16-88a0-db3bd744c216 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.788352828Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82f6e793-7f70-4e16-88a0-db3bd744c216 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.788625939Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82f6e793-7f70-4e16-88a0-db3bd744c216 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.830551566Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=45cffa11-5784-4bb4-a701-fc424781d72e name=/runtime.v1.RuntimeService/Version
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.830635165Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=45cffa11-5784-4bb4-a701-fc424781d72e name=/runtime.v1.RuntimeService/Version
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.832552963Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6037926d-240a-42fe-a5a4-b59cc5c2d8e0 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.833019320Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995227832978068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6037926d-240a-42fe-a5a4-b59cc5c2d8e0 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.833602851Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b190fff-b2cb-4c6b-9a91-17c05bcaa300 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.833688896Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b190fff-b2cb-4c6b-9a91-17c05bcaa300 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.833937698Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b190fff-b2cb-4c6b-9a91-17c05bcaa300 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.869592473Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b0744e98-da48-4141-8d6a-f2c5a13cb128 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.869688460Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b0744e98-da48-4141-8d6a-f2c5a13cb128 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.870779860Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03e95be0-7bc6-43c8-8577-f3f2fbd78014 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.871278264Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995227871255317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03e95be0-7bc6-43c8-8577-f3f2fbd78014 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.871837058Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3bbfc2ed-123f-4023-a4e4-47e279b584e0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.871904854Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3bbfc2ed-123f-4023-a4e4-47e279b584e0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:40:27 ha-406291 crio[679]: time="2024-06-21 18:40:27.872196336Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3bbfc2ed-123f-4023-a4e4-47e279b584e0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	252cb2f279857       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago      Running             busybox                   0                   cd0fd4f6a3d6c       busybox-fc5497c4f-qvl48
	6d732e2622f11       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago      Running             coredns                   0                   59eb38b2794b0       coredns-7db6d8ff4d-7ng4v
	6088ccc5ec4be       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago      Running             coredns                   0                   a68caa8578d30       coredns-7db6d8ff4d-nx5xs
	9d0ad73531279       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       0                   ab6a16146209c       storage-provisioner
	468b13f5a8054       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      12 minutes ago      Running             kindnet-cni               0                   956df8749e8db       kindnet-vnds7
	e41f8891c5177       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      12 minutes ago      Running             kube-proxy                0                   ab9fd8c2e0094       kube-proxy-xnbqj
	96a229fabb5aa       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     13 minutes ago      Running             kube-vip                  0                   79ad95611cf22       kube-vip-ha-406291
	a143e6000662a       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      13 minutes ago      Running             kube-scheduler            0                   7cae0fc993f3a       kube-scheduler-ha-406291
	89b399d67fa40       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      0                   afce4542ea7ca       etcd-ha-406291
	2d71c6ae5cee5       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      13 minutes ago      Running             kube-apiserver            0                   9552de7a0cb73       kube-apiserver-ha-406291
	3fbe446b39e8d       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      13 minutes ago      Running             kube-controller-manager   0                   2b8837f8e36da       kube-controller-manager-ha-406291
	
	
	==> coredns [6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57758 - 16030 "HINFO IN 938012208132191314.8379741084222464033. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014128651s
	[INFO] 10.244.0.4:60864 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000870211s
	[INFO] 10.244.0.4:49527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014553s
	[INFO] 10.244.0.4:59987 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181145s
	[INFO] 10.244.0.4:59378 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009664502s
	[INFO] 10.244.0.4:59188 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181625s
	[INFO] 10.244.0.4:33100 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137671s
	[INFO] 10.244.0.4:43551 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129631s
	[INFO] 10.244.0.4:59759 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152418s
	[INFO] 10.244.0.4:60292 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090372s
	
	
	==> coredns [6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45911 - 30730 "HINFO IN 2397840142540691982.2649863782968500509. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014966559s
	[INFO] 10.244.0.4:38404 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.013105268s
	[INFO] 10.244.0.4:49299 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.225770527s
	[INFO] 10.244.0.4:41342 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.010990835s
	[INFO] 10.244.0.4:55838 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003903098s
	[INFO] 10.244.0.4:59078 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163236s
	[INFO] 10.244.0.4:39541 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147137s
	[INFO] 10.244.0.4:47420 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000120366s
	
	
	==> describe nodes <==
	Name:               ha-406291
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=ha-406291
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_21T18_27_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 18:27:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406291
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 18:40:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    ha-406291
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 10b5f2f4e64d426eb3a71e7a23c0cea5
	  System UUID:                10b5f2f4-e64d-426e-b3a7-1e7a23c0cea5
	  Boot ID:                    10778ad9-ed13-4749-a084-25b2b2bfde76
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qvl48              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-7ng4v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-7db6d8ff4d-nx5xs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-406291                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-vnds7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-406291             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-406291    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-xnbqj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-406291             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-406291                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node ha-406291 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node ha-406291 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node ha-406291 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m   node-controller  Node ha-406291 event: Registered Node ha-406291 in Controller
	  Normal  NodeReady                12m   kubelet          Node ha-406291 status is now: NodeReady
	
	
	==> dmesg <==
	[Jun21 18:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051748] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037330] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.458081] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.725935] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +4.855560] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun21 18:27] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.057394] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056681] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.167604] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.147792] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.253886] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +3.905184] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.549385] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.060073] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.066237] systemd-fstab-generator[1360]: Ignoring "noauto" option for root device
	[  +0.078680] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.552032] kauditd_printk_skb: 21 callbacks suppressed
	[Jun21 18:28] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8] <==
	{"level":"info","ts":"2024-06-21T18:27:18.512305Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.198:2380"}
	{"level":"info","ts":"2024-06-21T18:27:18.939239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-21T18:27:18.93929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-21T18:27:18.93932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgPreVoteResp from f1d2ab5330a2a0e3 at term 1"}
	{"level":"info","ts":"2024-06-21T18:27:18.939332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became candidate at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.939339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgVoteResp from f1d2ab5330a2a0e3 at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.939349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became leader at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.93936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f1d2ab5330a2a0e3 elected leader f1d2ab5330a2a0e3 at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.949394Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.951989Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f1d2ab5330a2a0e3","local-member-attributes":"{Name:ha-406291 ClientURLs:[https://192.168.39.198:2379]}","request-path":"/0/members/f1d2ab5330a2a0e3/attributes","cluster-id":"9fb372ad12afeb1b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-21T18:27:18.952029Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:27:18.952218Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:27:18.966375Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9fb372ad12afeb1b","local-member-id":"f1d2ab5330a2a0e3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.966532Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.966591Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.968078Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.198:2379"}
	{"level":"info","ts":"2024-06-21T18:27:18.969834Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-21T18:27:18.973596Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-21T18:27:18.986355Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-21T18:27:37.357719Z","caller":"traceutil/trace.go:171","msg":"trace[571743030] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"105.067279ms","start":"2024-06-21T18:27:37.252598Z","end":"2024-06-21T18:27:37.357665Z","steps":["trace[571743030] 'process raft request'  (duration: 48.775466ms)","trace[571743030] 'compare'  (duration: 56.093787ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T18:28:12.689426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.176174ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11593268453381319053 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:496 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:369 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-21T18:28:12.689586Z","caller":"traceutil/trace.go:171","msg":"trace[939483523] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"172.541349ms","start":"2024-06-21T18:28:12.517021Z","end":"2024-06-21T18:28:12.689563Z","steps":["trace[939483523] 'process raft request'  (duration: 46.605278ms)","trace[939483523] 'compare'  (duration: 124.988397ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-21T18:37:19.55118Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2024-06-21T18:37:19.562898Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":969,"took":"11.353931ms","hash":518064132,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2441216,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-06-21T18:37:19.562955Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":518064132,"revision":969,"compact-revision":-1}
	
	
	==> kernel <==
	 18:40:28 up 13 min,  0 users,  load average: 0.14, 0.18, 0.11
	Linux ha-406291 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d] <==
	I0621 18:38:19.456643       1 main.go:227] handling current node
	I0621 18:38:29.460573       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:38:29.460617       1 main.go:227] handling current node
	I0621 18:38:39.464813       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:38:39.464932       1 main.go:227] handling current node
	I0621 18:38:49.476962       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:38:49.477180       1 main.go:227] handling current node
	I0621 18:38:59.489837       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:38:59.489986       1 main.go:227] handling current node
	I0621 18:39:09.501218       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:09.501252       1 main.go:227] handling current node
	I0621 18:39:19.504588       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:19.504638       1 main.go:227] handling current node
	I0621 18:39:29.510970       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:29.511181       1 main.go:227] handling current node
	I0621 18:39:39.514989       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:39.515025       1 main.go:227] handling current node
	I0621 18:39:49.520764       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:49.520908       1 main.go:227] handling current node
	I0621 18:39:59.524302       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:59.524430       1 main.go:227] handling current node
	I0621 18:40:09.536871       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:09.536951       1 main.go:227] handling current node
	I0621 18:40:19.546045       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:19.546228       1 main.go:227] handling current node
	
	
	==> kube-apiserver [2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3] <==
	I0621 18:27:21.223679       1 cache.go:39] Caches are synced for autoregister controller
	I0621 18:27:21.228827       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0621 18:27:21.231033       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0621 18:27:21.231057       1 policy_source.go:224] refreshing policies
	E0621 18:27:21.244004       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0621 18:27:21.291900       1 controller.go:615] quota admission added evaluator for: namespaces
	I0621 18:27:21.301249       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0621 18:27:22.093764       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0621 18:27:22.100226       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0621 18:27:22.100345       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0621 18:27:22.679124       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0621 18:27:22.717908       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0621 18:27:22.803597       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0621 18:27:22.812663       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.198]
	I0621 18:27:22.813674       1 controller.go:615] quota admission added evaluator for: endpoints
	I0621 18:27:22.817676       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0621 18:27:23.142771       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0621 18:27:24.323202       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0621 18:27:24.338622       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0621 18:27:24.532806       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0621 18:27:36.921775       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0621 18:27:37.247444       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0621 18:40:26.217258       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52318: use of closed network connection
	E0621 18:40:26.646809       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52394: use of closed network connection
	E0621 18:40:27.039177       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52460: use of closed network connection
	
	
	==> kube-controller-manager [3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31] <==
	I0621 18:27:36.996032       1 shared_informer.go:320] Caches are synced for cronjob
	I0621 18:27:36.997228       1 shared_informer.go:320] Caches are synced for stateful set
	I0621 18:27:37.047455       1 shared_informer.go:320] Caches are synced for resource quota
	I0621 18:27:37.059247       1 shared_informer.go:320] Caches are synced for resource quota
	I0621 18:27:37.506333       1 shared_informer.go:320] Caches are synced for garbage collector
	I0621 18:27:37.559310       1 shared_informer.go:320] Caches are synced for garbage collector
	I0621 18:27:37.559392       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0621 18:27:37.600276       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="666.508123ms"
	I0621 18:27:37.660728       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.34673ms"
	I0621 18:27:37.660938       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="161.085µs"
	I0621 18:27:39.328050       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="55.475µs"
	I0621 18:27:39.330983       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.725µs"
	I0621 18:27:39.352409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.246µs"
	I0621 18:27:39.366116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.163µs"
	I0621 18:27:40.575618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="65.679µs"
	I0621 18:27:40.612176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.937752ms"
	I0621 18:27:40.612598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.232µs"
	I0621 18:27:40.634931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.444693ms"
	I0621 18:27:40.635035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.847µs"
	I0621 18:27:41.885215       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0621 18:28:57.137627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.563277ms"
	I0621 18:28:57.164070       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.375749ms"
	I0621 18:28:57.164194       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.743µs"
	I0621 18:29:00.876863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.452577ms"
	I0621 18:29:00.877083       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.932µs"
	
	
	==> kube-proxy [e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547] <==
	I0621 18:27:38.126736       1 server_linux.go:69] "Using iptables proxy"
	I0621 18:27:38.143236       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.198"]
	I0621 18:27:38.177576       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0621 18:27:38.177626       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0621 18:27:38.177644       1 server_linux.go:165] "Using iptables Proxier"
	I0621 18:27:38.180797       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0621 18:27:38.181002       1 server.go:872] "Version info" version="v1.30.2"
	I0621 18:27:38.181026       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 18:27:38.182882       1 config.go:192] "Starting service config controller"
	I0621 18:27:38.183195       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0621 18:27:38.183262       1 config.go:101] "Starting endpoint slice config controller"
	I0621 18:27:38.183278       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0621 18:27:38.184787       1 config.go:319] "Starting node config controller"
	I0621 18:27:38.184819       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0621 18:27:38.283818       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0621 18:27:38.283839       1 shared_informer.go:320] Caches are synced for service config
	I0621 18:27:38.285303       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d] <==
	W0621 18:27:21.175406       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0621 18:27:21.176948       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0621 18:27:21.176960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0621 18:27:21.176992       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:21.177025       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177056       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0621 18:27:21.177088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0621 18:27:21.177120       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0621 18:27:21.177197       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:21.177204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0621 18:27:22.041765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.041824       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.144830       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:22.144881       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0621 18:27:22.217224       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.217266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.256407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:22.256450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0621 18:27:22.361486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0621 18:27:22.361536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0621 18:27:22.366073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0621 18:27:22.366190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0621 18:27:25.267361       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 21 18:36:24 ha-406291 kubelet[1367]: E0621 18:36:24.482853    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:36:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:36:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:36:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:36:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:37:24 ha-406291 kubelet[1367]: E0621 18:37:24.483671    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:37:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:37:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:37:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:37:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:38:24 ha-406291 kubelet[1367]: E0621 18:38:24.483473    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:38:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:38:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:38:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:38:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:39:24 ha-406291 kubelet[1367]: E0621 18:39:24.484210    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:39:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:39:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:39:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:39:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:40:24 ha-406291 kubelet[1367]: E0621 18:40:24.483552    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:40:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:40:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:40:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:40:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b] <==
	I0621 18:27:40.053572       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0621 18:27:40.071388       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0621 18:27:40.071477       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0621 18:27:40.092555       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0621 18:27:40.093079       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-406291_9408dd1b-5b4e-4652-aac5-9de4270d5daf!
	I0621 18:27:40.092824       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3a538f5e-15b2-4fb1-aabe-7ae7b744ce8d", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-406291_9408dd1b-5b4e-4652-aac5-9de4270d5daf became leader
	I0621 18:27:40.194107       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-406291_9408dd1b-5b4e-4652-aac5-9de4270d5daf!
	

                                                
                                                
-- /stdout --
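The repeating kubelet error in the log dump above is the periodic iptables canary failing because the guest kernel exposes no ip6tables nat table; it is unrelated to the scheduling failure under investigation and does not affect the IPv4 proxying in use. A minimal sketch of how one could confirm this from the host, assuming shell access through the profile shown in this report (the modprobe step is optional and only works if the guest image ships the ip6table_nat module):

	minikube ssh -p ha-406291 -- sudo ip6tables -t nat -L     # expected to reproduce the "Table does not exist" error
	minikube ssh -p ha-406291 -- sudo modprobe ip6table_nat   # loading the module, if present, would silence the canary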
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-406291 -n ha-406291
helpers_test.go:261: (dbg) Run:  kubectl --context ha-406291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-drm4v busybox-fc5497c4f-p2c87
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeployApp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-406291 describe pod busybox-fc5497c4f-drm4v busybox-fc5497c4f-p2c87
helpers_test.go:282: (dbg) kubectl --context ha-406291 describe pod busybox-fc5497c4f-drm4v busybox-fc5497c4f-p2c87:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-drm4v
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-82b4g (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-82b4g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  65s (x3 over 11m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-fc5497c4f-p2c87
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q8tzk (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-q8tzk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  65s (x3 over 11m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeployApp (692.08s)
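The FailedScheduling events in the describe output above explain the failure: with only one schedulable node in the cluster at this point, a pod anti-affinity rule on the busybox pods leaves every replica after the first one Pending. A minimal sketch of how to confirm that reading, assuming the pods belong to a Deployment named busybox in the default namespace (the name is inferred from the ReplicaSet busybox-fc5497c4f and is not printed directly above):

	kubectl --context ha-406291 -n default get deployment busybox -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'
	kubectl --context ha-406291 get nodes -o wide   # a single Ready node means the remaining replicas cannot satisfy the rule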

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (2.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- exec busybox-fc5497c4f-drm4v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-406291 -- exec busybox-fc5497c4f-drm4v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (111.609956ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-drm4v does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:209: Pod busybox-fc5497c4f-drm4v could not resolve 'host.minikube.internal': exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- exec busybox-fc5497c4f-p2c87 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-406291 -- exec busybox-fc5497c4f-p2c87 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (109.294146ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-p2c87 does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:209: Pod busybox-fc5497c4f-p2c87 could not resolve 'host.minikube.internal': exit status 1
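Both failures above occur before any DNS lookup runs: the two pods are still Pending with no node assigned, so the API server rejects kubectl exec with the BadRequest shown in stderr; only busybox-fc5497c4f-qvl48, which did get scheduled, can run the lookup and the ping below. A quick sanity check, reusing a pod name from this report:

	kubectl --context ha-406291 get pod busybox-fc5497c4f-drm4v -o jsonpath='{.status.phase} {.spec.nodeName}'
	# expected to print "Pending" with an empty node name, matching the "does not have a host assigned" error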
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- exec busybox-fc5497c4f-qvl48 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406291 -- exec busybox-fc5497c4f-qvl48 -- sh -c "ping -c 1 192.168.39.1"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-406291 -n ha-406291
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-406291 logs -n 25: (1.152477877s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/21 18:26:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0621 18:26:42.447747   30068 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:26:42.447858   30068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:26:42.447867   30068 out.go:304] Setting ErrFile to fd 2...
	I0621 18:26:42.447871   30068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:26:42.448064   30068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:26:42.448611   30068 out.go:298] Setting JSON to false
	I0621 18:26:42.449397   30068 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4100,"bootTime":1718990302,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 18:26:42.449454   30068 start.go:139] virtualization: kvm guest
	I0621 18:26:42.451750   30068 out.go:177] * [ha-406291] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 18:26:42.453097   30068 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 18:26:42.453116   30068 notify.go:220] Checking for updates...
	I0621 18:26:42.456195   30068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 18:26:42.457398   30068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:26:42.458579   30068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.459798   30068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 18:26:42.461088   30068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 18:26:42.462525   30068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 18:26:42.497263   30068 out.go:177] * Using the kvm2 driver based on user configuration
	I0621 18:26:42.498734   30068 start.go:297] selected driver: kvm2
	I0621 18:26:42.498753   30068 start.go:901] validating driver "kvm2" against <nil>
	I0621 18:26:42.498763   30068 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 18:26:42.499421   30068 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:26:42.499483   30068 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 18:26:42.513772   30068 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 18:26:42.513840   30068 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0621 18:26:42.514036   30068 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0621 18:26:42.514063   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:26:42.514070   30068 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0621 18:26:42.514080   30068 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0621 18:26:42.514119   30068 start.go:340] cluster config:
	{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:26:42.514203   30068 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:26:42.515839   30068 out.go:177] * Starting "ha-406291" primary control-plane node in "ha-406291" cluster
	I0621 18:26:42.516925   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:26:42.516952   30068 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 18:26:42.516960   30068 cache.go:56] Caching tarball of preloaded images
	I0621 18:26:42.517025   30068 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:26:42.517035   30068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:26:42.517302   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:26:42.517325   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json: {Name:mkd43eceea282503c79b6e4b90bbf7258fcf8b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:26:42.517445   30068 start.go:360] acquireMachinesLock for ha-406291: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:26:42.517470   30068 start.go:364] duration metric: took 13.314µs to acquireMachinesLock for "ha-406291"
	I0621 18:26:42.517485   30068 start.go:93] Provisioning new machine with config: &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:26:42.517531   30068 start.go:125] createHost starting for "" (driver="kvm2")
	I0621 18:26:42.518937   30068 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0621 18:26:42.519071   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:26:42.519109   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:26:42.533235   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0621 18:26:42.533669   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:26:42.534312   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:26:42.534360   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:26:42.534665   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:26:42.534880   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:26:42.535018   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:26:42.535180   30068 start.go:159] libmachine.API.Create for "ha-406291" (driver="kvm2")
	I0621 18:26:42.535209   30068 client.go:168] LocalClient.Create starting
	I0621 18:26:42.535233   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem
	I0621 18:26:42.535267   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:26:42.535282   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:26:42.535339   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem
	I0621 18:26:42.535357   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:26:42.535367   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:26:42.535383   30068 main.go:141] libmachine: Running pre-create checks...
	I0621 18:26:42.535396   30068 main.go:141] libmachine: (ha-406291) Calling .PreCreateCheck
	I0621 18:26:42.535734   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:26:42.536101   30068 main.go:141] libmachine: Creating machine...
	I0621 18:26:42.536113   30068 main.go:141] libmachine: (ha-406291) Calling .Create
	I0621 18:26:42.536232   30068 main.go:141] libmachine: (ha-406291) Creating KVM machine...
	I0621 18:26:42.537484   30068 main.go:141] libmachine: (ha-406291) DBG | found existing default KVM network
	I0621 18:26:42.538310   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.538153   30091 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0621 18:26:42.538339   30068 main.go:141] libmachine: (ha-406291) DBG | created network xml: 
	I0621 18:26:42.538346   30068 main.go:141] libmachine: (ha-406291) DBG | <network>
	I0621 18:26:42.538355   30068 main.go:141] libmachine: (ha-406291) DBG |   <name>mk-ha-406291</name>
	I0621 18:26:42.538371   30068 main.go:141] libmachine: (ha-406291) DBG |   <dns enable='no'/>
	I0621 18:26:42.538385   30068 main.go:141] libmachine: (ha-406291) DBG |   
	I0621 18:26:42.538392   30068 main.go:141] libmachine: (ha-406291) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0621 18:26:42.538400   30068 main.go:141] libmachine: (ha-406291) DBG |     <dhcp>
	I0621 18:26:42.538412   30068 main.go:141] libmachine: (ha-406291) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0621 18:26:42.538421   30068 main.go:141] libmachine: (ha-406291) DBG |     </dhcp>
	I0621 18:26:42.538439   30068 main.go:141] libmachine: (ha-406291) DBG |   </ip>
	I0621 18:26:42.538451   30068 main.go:141] libmachine: (ha-406291) DBG |   
	I0621 18:26:42.538458   30068 main.go:141] libmachine: (ha-406291) DBG | </network>
	I0621 18:26:42.538470   30068 main.go:141] libmachine: (ha-406291) DBG | 
	I0621 18:26:42.543401   30068 main.go:141] libmachine: (ha-406291) DBG | trying to create private KVM network mk-ha-406291 192.168.39.0/24...
	I0621 18:26:42.606041   30068 main.go:141] libmachine: (ha-406291) DBG | private KVM network mk-ha-406291 192.168.39.0/24 created
	I0621 18:26:42.606072   30068 main.go:141] libmachine: (ha-406291) Setting up store path in /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 ...
	I0621 18:26:42.606091   30068 main.go:141] libmachine: (ha-406291) Building disk image from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 18:26:42.606165   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.606075   30091 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.606280   30068 main.go:141] libmachine: (ha-406291) Downloading /home/jenkins/minikube-integration/19112-8111/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0621 18:26:42.829374   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.829262   30091 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa...
	I0621 18:26:42.941790   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.941666   30091 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/ha-406291.rawdisk...
	I0621 18:26:42.941834   30068 main.go:141] libmachine: (ha-406291) DBG | Writing magic tar header
	I0621 18:26:42.941844   30068 main.go:141] libmachine: (ha-406291) DBG | Writing SSH key tar header
	I0621 18:26:42.941852   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.941778   30091 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 ...
	I0621 18:26:42.941909   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291
	I0621 18:26:42.941989   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 (perms=drwx------)
	I0621 18:26:42.942007   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines
	I0621 18:26:42.942019   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines (perms=drwxr-xr-x)
	I0621 18:26:42.942033   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube (perms=drwxr-xr-x)
	I0621 18:26:42.942053   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111 (perms=drwxrwxr-x)
	I0621 18:26:42.942060   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.942069   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111
	I0621 18:26:42.942075   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0621 18:26:42.942080   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0621 18:26:42.942088   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins
	I0621 18:26:42.942104   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0621 18:26:42.942117   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home
	I0621 18:26:42.942128   30068 main.go:141] libmachine: (ha-406291) DBG | Skipping /home - not owner
	I0621 18:26:42.942142   30068 main.go:141] libmachine: (ha-406291) Creating domain...
	I0621 18:26:42.943154   30068 main.go:141] libmachine: (ha-406291) define libvirt domain using xml: 
	I0621 18:26:42.943176   30068 main.go:141] libmachine: (ha-406291) <domain type='kvm'>
	I0621 18:26:42.943183   30068 main.go:141] libmachine: (ha-406291)   <name>ha-406291</name>
	I0621 18:26:42.943188   30068 main.go:141] libmachine: (ha-406291)   <memory unit='MiB'>2200</memory>
	I0621 18:26:42.943199   30068 main.go:141] libmachine: (ha-406291)   <vcpu>2</vcpu>
	I0621 18:26:42.943203   30068 main.go:141] libmachine: (ha-406291)   <features>
	I0621 18:26:42.943208   30068 main.go:141] libmachine: (ha-406291)     <acpi/>
	I0621 18:26:42.943212   30068 main.go:141] libmachine: (ha-406291)     <apic/>
	I0621 18:26:42.943217   30068 main.go:141] libmachine: (ha-406291)     <pae/>
	I0621 18:26:42.943223   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943229   30068 main.go:141] libmachine: (ha-406291)   </features>
	I0621 18:26:42.943234   30068 main.go:141] libmachine: (ha-406291)   <cpu mode='host-passthrough'>
	I0621 18:26:42.943255   30068 main.go:141] libmachine: (ha-406291)   
	I0621 18:26:42.943266   30068 main.go:141] libmachine: (ha-406291)   </cpu>
	I0621 18:26:42.943284   30068 main.go:141] libmachine: (ha-406291)   <os>
	I0621 18:26:42.943318   30068 main.go:141] libmachine: (ha-406291)     <type>hvm</type>
	I0621 18:26:42.943328   30068 main.go:141] libmachine: (ha-406291)     <boot dev='cdrom'/>
	I0621 18:26:42.943333   30068 main.go:141] libmachine: (ha-406291)     <boot dev='hd'/>
	I0621 18:26:42.943341   30068 main.go:141] libmachine: (ha-406291)     <bootmenu enable='no'/>
	I0621 18:26:42.943345   30068 main.go:141] libmachine: (ha-406291)   </os>
	I0621 18:26:42.943355   30068 main.go:141] libmachine: (ha-406291)   <devices>
	I0621 18:26:42.943360   30068 main.go:141] libmachine: (ha-406291)     <disk type='file' device='cdrom'>
	I0621 18:26:42.943371   30068 main.go:141] libmachine: (ha-406291)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/boot2docker.iso'/>
	I0621 18:26:42.943384   30068 main.go:141] libmachine: (ha-406291)       <target dev='hdc' bus='scsi'/>
	I0621 18:26:42.943397   30068 main.go:141] libmachine: (ha-406291)       <readonly/>
	I0621 18:26:42.943404   30068 main.go:141] libmachine: (ha-406291)     </disk>
	I0621 18:26:42.943417   30068 main.go:141] libmachine: (ha-406291)     <disk type='file' device='disk'>
	I0621 18:26:42.943429   30068 main.go:141] libmachine: (ha-406291)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0621 18:26:42.943445   30068 main.go:141] libmachine: (ha-406291)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/ha-406291.rawdisk'/>
	I0621 18:26:42.943456   30068 main.go:141] libmachine: (ha-406291)       <target dev='hda' bus='virtio'/>
	I0621 18:26:42.943478   30068 main.go:141] libmachine: (ha-406291)     </disk>
	I0621 18:26:42.943499   30068 main.go:141] libmachine: (ha-406291)     <interface type='network'>
	I0621 18:26:42.943509   30068 main.go:141] libmachine: (ha-406291)       <source network='mk-ha-406291'/>
	I0621 18:26:42.943513   30068 main.go:141] libmachine: (ha-406291)       <model type='virtio'/>
	I0621 18:26:42.943519   30068 main.go:141] libmachine: (ha-406291)     </interface>
	I0621 18:26:42.943526   30068 main.go:141] libmachine: (ha-406291)     <interface type='network'>
	I0621 18:26:42.943532   30068 main.go:141] libmachine: (ha-406291)       <source network='default'/>
	I0621 18:26:42.943539   30068 main.go:141] libmachine: (ha-406291)       <model type='virtio'/>
	I0621 18:26:42.943544   30068 main.go:141] libmachine: (ha-406291)     </interface>
	I0621 18:26:42.943549   30068 main.go:141] libmachine: (ha-406291)     <serial type='pty'>
	I0621 18:26:42.943554   30068 main.go:141] libmachine: (ha-406291)       <target port='0'/>
	I0621 18:26:42.943560   30068 main.go:141] libmachine: (ha-406291)     </serial>
	I0621 18:26:42.943565   30068 main.go:141] libmachine: (ha-406291)     <console type='pty'>
	I0621 18:26:42.943571   30068 main.go:141] libmachine: (ha-406291)       <target type='serial' port='0'/>
	I0621 18:26:42.943583   30068 main.go:141] libmachine: (ha-406291)     </console>
	I0621 18:26:42.943593   30068 main.go:141] libmachine: (ha-406291)     <rng model='virtio'>
	I0621 18:26:42.943602   30068 main.go:141] libmachine: (ha-406291)       <backend model='random'>/dev/random</backend>
	I0621 18:26:42.943609   30068 main.go:141] libmachine: (ha-406291)     </rng>
	I0621 18:26:42.943617   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943621   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943627   30068 main.go:141] libmachine: (ha-406291)   </devices>
	I0621 18:26:42.943631   30068 main.go:141] libmachine: (ha-406291) </domain>
	I0621 18:26:42.943638   30068 main.go:141] libmachine: (ha-406291) 
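The XML logged above is the complete libvirt domain definition the kvm2 driver submits before starting the VM. As a rough, stdlib-only Go sketch of how such a definition can be rendered from a template (the struct fields and the trimmed-down XML below are illustrative assumptions, not the driver's actual template), this prints a comparable document that a driver would then hand to libvirt's define-domain call:

package main

import (
	"os"
	"text/template"
)

// DomainSpec holds the handful of values interpolated into the XML below.
// The field names are illustrative; the real driver carries many more options.
type DomainSpec struct {
	Name      string
	MemoryMiB int
	VCPUs     int
	ISOPath   string
	DiskPath  string
	Network   string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	spec := DomainSpec{
		Name:      "ha-406291",
		MemoryMiB: 2200,
		VCPUs:     2,
		ISOPath:   "/path/to/boot2docker.iso",
		DiskPath:  "/path/to/ha-406291.rawdisk",
		Network:   "mk-ha-406291",
	}
	// Render the XML to stdout; a real driver would pass the result to libvirt.
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	if err := tmpl.Execute(os.Stdout, spec); err != nil {
		panic(err)
	}
}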
	I0621 18:26:42.948298   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:44:10:c4 in network default
	I0621 18:26:42.948968   30068 main.go:141] libmachine: (ha-406291) Ensuring networks are active...
	I0621 18:26:42.948988   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:42.949710   30068 main.go:141] libmachine: (ha-406291) Ensuring network default is active
	I0621 18:26:42.950033   30068 main.go:141] libmachine: (ha-406291) Ensuring network mk-ha-406291 is active
	I0621 18:26:42.950493   30068 main.go:141] libmachine: (ha-406291) Getting domain xml...
	I0621 18:26:42.951151   30068 main.go:141] libmachine: (ha-406291) Creating domain...
	I0621 18:26:44.128421   30068 main.go:141] libmachine: (ha-406291) Waiting to get IP...
	I0621 18:26:44.129183   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.129530   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.129550   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.129513   30091 retry.go:31] will retry after 273.280189ms: waiting for machine to come up
	I0621 18:26:44.404590   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.405440   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.405467   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.405386   30091 retry.go:31] will retry after 363.287979ms: waiting for machine to come up
	I0621 18:26:44.769749   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.770188   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.770217   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.770146   30091 retry.go:31] will retry after 445.9009ms: waiting for machine to come up
	I0621 18:26:45.217708   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:45.218113   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:45.218132   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:45.218075   30091 retry.go:31] will retry after 497.769852ms: waiting for machine to come up
	I0621 18:26:45.717913   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:45.718380   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:45.718402   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:45.718333   30091 retry.go:31] will retry after 609.412902ms: waiting for machine to come up
	I0621 18:26:46.329589   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:46.330043   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:46.330077   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:46.330033   30091 retry.go:31] will retry after 668.226784ms: waiting for machine to come up
	I0621 18:26:46.999851   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:47.000352   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:47.000399   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:47.000310   30091 retry.go:31] will retry after 928.90777ms: waiting for machine to come up
	I0621 18:26:47.931043   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:47.931568   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:47.931598   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:47.931527   30091 retry.go:31] will retry after 1.407643188s: waiting for machine to come up
	I0621 18:26:49.341126   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:49.341529   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:49.341557   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:49.341489   30091 retry.go:31] will retry after 1.657120945s: waiting for machine to come up
	I0621 18:26:51.001518   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:51.001999   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:51.002022   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:51.001955   30091 retry.go:31] will retry after 1.506025988s: waiting for machine to come up
	I0621 18:26:52.509823   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:52.510314   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:52.510342   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:52.510269   30091 retry.go:31] will retry after 2.859818514s: waiting for machine to come up
	I0621 18:26:55.371181   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:55.371726   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:55.371755   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:55.371678   30091 retry.go:31] will retry after 3.374080501s: waiting for machine to come up
	I0621 18:26:58.747494   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:58.748019   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:58.748039   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:58.747991   30091 retry.go:31] will retry after 4.386740875s: waiting for machine to come up
	I0621 18:27:03.136546   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.137046   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has current primary IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.137063   30068 main.go:141] libmachine: (ha-406291) Found IP for machine: 192.168.39.198
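The retry lines above show the driver polling libvirt's DHCP leases with a growing, jittered delay until the new domain reports an address. A minimal sketch of that wait-with-backoff pattern, assuming a placeholder lookupIP helper rather than the driver's real lease lookup:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("no IP yet")

// lookupIP is a stand-in for querying the DHCP lease of the domain's MAC address.
func lookupIP(mac string) (string, error) {
	return "", errNoIP // pretend the lease has not appeared yet
}

// waitForIP polls until an address shows up or the deadline passes,
// growing the delay (with jitter) between attempts, much like the log above.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("timed out waiting for IP on %s", mac)
}

func main() {
	if _, err := waitForIP("52:54:00:38:dc:46", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}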
	I0621 18:27:03.137079   30068 main.go:141] libmachine: (ha-406291) Reserving static IP address...
	I0621 18:27:03.137427   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find host DHCP lease matching {name: "ha-406291", mac: "52:54:00:38:dc:46", ip: "192.168.39.198"} in network mk-ha-406291
	I0621 18:27:03.211473   30068 main.go:141] libmachine: (ha-406291) DBG | Getting to WaitForSSH function...
	I0621 18:27:03.211506   30068 main.go:141] libmachine: (ha-406291) Reserved static IP address: 192.168.39.198
	I0621 18:27:03.211519   30068 main.go:141] libmachine: (ha-406291) Waiting for SSH to be available...
	I0621 18:27:03.214029   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.214477   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291
	I0621 18:27:03.214509   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find defined IP address of network mk-ha-406291 interface with MAC address 52:54:00:38:dc:46
	I0621 18:27:03.214661   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH client type: external
	I0621 18:27:03.214702   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa (-rw-------)
	I0621 18:27:03.214745   30068 main.go:141] libmachine: (ha-406291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:03.214771   30068 main.go:141] libmachine: (ha-406291) DBG | About to run SSH command:
	I0621 18:27:03.214784   30068 main.go:141] libmachine: (ha-406291) DBG | exit 0
	I0621 18:27:03.218578   30068 main.go:141] libmachine: (ha-406291) DBG | SSH cmd err, output: exit status 255: 
	I0621 18:27:03.218603   30068 main.go:141] libmachine: (ha-406291) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0621 18:27:03.218614   30068 main.go:141] libmachine: (ha-406291) DBG | command : exit 0
	I0621 18:27:03.218630   30068 main.go:141] libmachine: (ha-406291) DBG | err     : exit status 255
	I0621 18:27:03.218643   30068 main.go:141] libmachine: (ha-406291) DBG | output  : 
	I0621 18:27:06.220803   30068 main.go:141] libmachine: (ha-406291) DBG | Getting to WaitForSSH function...
	I0621 18:27:06.223287   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.223552   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.223591   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.223725   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH client type: external
	I0621 18:27:06.223751   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa (-rw-------)
	I0621 18:27:06.223775   30068 main.go:141] libmachine: (ha-406291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:06.223788   30068 main.go:141] libmachine: (ha-406291) DBG | About to run SSH command:
	I0621 18:27:06.223797   30068 main.go:141] libmachine: (ha-406291) DBG | exit 0
	I0621 18:27:06.345962   30068 main.go:141] libmachine: (ha-406291) DBG | SSH cmd err, output: <nil>: 
	I0621 18:27:06.346198   30068 main.go:141] libmachine: (ha-406291) KVM machine creation complete!
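Before declaring the machine created, the driver repeatedly shells out to /usr/bin/ssh with host-key checking disabled and runs a trivial `exit 0`, treating exit status 255 as "not ready yet" (as seen at 18:27:03 above) and retrying until the command succeeds. A compact sketch of that probe, with the host address, key path, and retry cadence as placeholder assumptions:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshExitZero runs `exit 0` on the guest with the external ssh client,
// mirroring the option set shown in the log.
func sshExitZero(host, keyPath string) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + host,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

// waitForSSH keeps probing until the trivial command succeeds or attempts run out.
func waitForSSH(host, keyPath string, attempts int) error {
	for i := 0; i < attempts; i++ {
		if err := sshExitZero(host, keyPath); err == nil {
			return nil
		} else {
			fmt.Printf("ssh not ready yet: %v\n", err)
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("ssh never became available on %s", host)
}

func main() {
	_ = waitForSSH("192.168.39.198", "/path/to/id_rsa", 5)
}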
	I0621 18:27:06.346530   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:27:06.347151   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:06.347376   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:06.347539   30068 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0621 18:27:06.347553   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:06.349257   30068 main.go:141] libmachine: Detecting operating system of created instance...
	I0621 18:27:06.349272   30068 main.go:141] libmachine: Waiting for SSH to be available...
	I0621 18:27:06.349278   30068 main.go:141] libmachine: Getting to WaitForSSH function...
	I0621 18:27:06.349284   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.351365   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.351709   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.351738   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.351848   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.352053   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.352215   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.352441   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.352676   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.352926   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.352939   30068 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0621 18:27:06.449038   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:06.449066   30068 main.go:141] libmachine: Detecting the provisioner...
	I0621 18:27:06.449077   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.451811   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.452202   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.452223   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.452405   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.452602   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.452762   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.452898   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.453074   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.453321   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.453334   30068 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0621 18:27:06.550539   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0621 18:27:06.550611   30068 main.go:141] libmachine: found compatible host: buildroot
	I0621 18:27:06.550618   30068 main.go:141] libmachine: Provisioning with buildroot...
	I0621 18:27:06.550625   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.550871   30068 buildroot.go:166] provisioning hostname "ha-406291"
	I0621 18:27:06.550891   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.551068   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.553701   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.554112   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.554138   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.554279   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.554452   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.554601   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.554725   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.554869   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.555029   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.555040   30068 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291 && echo "ha-406291" | sudo tee /etc/hostname
	I0621 18:27:06.664012   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291
	
	I0621 18:27:06.664038   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.666600   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.666923   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.666952   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.667091   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.667277   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.667431   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.667559   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.667745   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.667932   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.667949   30068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:27:06.778156   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:06.778199   30068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:27:06.778224   30068 buildroot.go:174] setting up certificates
	I0621 18:27:06.778237   30068 provision.go:84] configureAuth start
	I0621 18:27:06.778250   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.778526   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:06.781267   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.781583   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.781610   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.781773   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.784225   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.784546   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.784564   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.784717   30068 provision.go:143] copyHostCerts
	I0621 18:27:06.784747   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:06.784796   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:27:06.784813   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:06.784893   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:27:06.784992   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:06.785017   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:27:06.785023   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:06.785064   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:27:06.785126   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:06.785153   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:27:06.785162   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:06.785194   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:27:06.785257   30068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291 san=[127.0.0.1 192.168.39.198 ha-406291 localhost minikube]
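The provisioner then generates a server certificate whose subject-alternative names cover the loopback address, the VM IP, the hostname, localhost, and minikube. Below is a self-signed stdlib sketch of building such a SAN certificate; the real flow signs with the minikube CA key (ca.pem / ca-key.pem) rather than self-signing, and the one-day validity here is only an assumption:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed for brevity; the real provisioner signs with the CA key instead.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-406291"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list taken from the log line above.
		DNSNames:    []string{"ha-406291", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.198")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}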
	I0621 18:27:06.904910   30068 provision.go:177] copyRemoteCerts
	I0621 18:27:06.904976   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:27:06.905004   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.907600   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.907883   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.907916   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.908115   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.908308   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.908462   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.908599   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:06.987463   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:27:06.987540   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:27:07.009572   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:27:07.009661   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0621 18:27:07.031219   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:27:07.031333   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 18:27:07.052682   30068 provision.go:87] duration metric: took 274.433059ms to configureAuth
	I0621 18:27:07.052709   30068 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:27:07.052895   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:07.052984   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.055368   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.055720   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.055742   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.055971   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.056161   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.056324   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.056453   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.056615   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:07.056785   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:07.056814   30068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:27:07.307055   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 18:27:07.307083   30068 main.go:141] libmachine: Checking connection to Docker...
	I0621 18:27:07.307105   30068 main.go:141] libmachine: (ha-406291) Calling .GetURL
	I0621 18:27:07.308373   30068 main.go:141] libmachine: (ha-406291) DBG | Using libvirt version 6000000
	I0621 18:27:07.310322   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.310631   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.310658   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.310756   30068 main.go:141] libmachine: Docker is up and running!
	I0621 18:27:07.310768   30068 main.go:141] libmachine: Reticulating splines...
	I0621 18:27:07.310774   30068 client.go:171] duration metric: took 24.775558818s to LocalClient.Create
	I0621 18:27:07.310795   30068 start.go:167] duration metric: took 24.775614868s to libmachine.API.Create "ha-406291"
	I0621 18:27:07.310807   30068 start.go:293] postStartSetup for "ha-406291" (driver="kvm2")
	I0621 18:27:07.310818   30068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:27:07.310835   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.311186   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:27:07.311208   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.313308   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.313543   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.313581   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.313682   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.313855   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.314042   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.314209   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.391859   30068 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:27:07.396062   30068 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:27:07.396083   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:27:07.396132   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:27:07.396193   30068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:27:07.396202   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:27:07.396289   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:27:07.405435   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:07.427927   30068 start.go:296] duration metric: took 117.075834ms for postStartSetup
	I0621 18:27:07.427984   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:27:07.428562   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:07.431157   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.431479   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.431523   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.431791   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:07.431969   30068 start.go:128] duration metric: took 24.914429669s to createHost
	I0621 18:27:07.431990   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.434121   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.434421   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.434445   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.434510   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.434692   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.434865   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.435009   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.435168   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:07.435372   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:07.435384   30068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0621 18:27:07.530141   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718994427.508226463
	
	I0621 18:27:07.530165   30068 fix.go:216] guest clock: 1718994427.508226463
	I0621 18:27:07.530173   30068 fix.go:229] Guest: 2024-06-21 18:27:07.508226463 +0000 UTC Remote: 2024-06-21 18:27:07.431981059 +0000 UTC m=+25.016949864 (delta=76.245404ms)
	I0621 18:27:07.530199   30068 fix.go:200] guest clock delta is within tolerance: 76.245404ms
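The clock check above runs `date +%s.%N` on the guest, parses the seconds.nanoseconds string, and compares it against the host clock to decide whether the delta is tolerable. A small sketch of that parse-and-compare step (the two-second tolerance is an assumption for illustration, not minikube's actual threshold):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the `date +%s.%N` output shown in the log into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	secs, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// `date +%N` prints nine zero-padded digits, so the fraction parses as nanoseconds.
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(secs, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1718994427.508226463")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	// A +/-2s tolerance is an assumption for this sketch.
	if delta > -2*time.Second && delta < 2*time.Second {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}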
	I0621 18:27:07.530204   30068 start.go:83] releasing machines lock for "ha-406291", held for 25.012726918s
	I0621 18:27:07.530222   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.530466   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:07.532753   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.533110   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.533151   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.533275   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533702   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533877   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533978   30068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:27:07.534028   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.534087   30068 ssh_runner.go:195] Run: cat /version.json
	I0621 18:27:07.534115   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.536489   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536798   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.536828   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536845   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536983   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.537154   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.537312   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.537330   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.537337   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.537509   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.537507   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.537675   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.537830   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.537968   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.610886   30068 ssh_runner.go:195] Run: systemctl --version
	I0621 18:27:07.648150   30068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:27:07.798080   30068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:27:07.803683   30068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:27:07.803731   30068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:27:07.820345   30068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0621 18:27:07.820363   30068 start.go:494] detecting cgroup driver to use...
	I0621 18:27:07.820412   30068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:27:07.835960   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:27:07.849269   30068 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:27:07.849324   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:27:07.861858   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:27:07.874371   30068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:27:07.984965   30068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:27:08.126897   30068 docker.go:233] disabling docker service ...
	I0621 18:27:08.126973   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:27:08.140294   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:27:08.152460   30068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:27:08.289101   30068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:27:08.414578   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 18:27:08.428193   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:27:08.445335   30068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:27:08.445406   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.454715   30068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:27:08.454780   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.464286   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.473688   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.483215   30068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:27:08.492907   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.502386   30068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.518138   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
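The sequence of sed invocations above edits /etc/crio/crio.conf.d/02-crio.conf in place: pinning the pause image to registry.k8s.io/pause:3.9, forcing the cgroupfs cgroup manager, and injecting default_sysctls. A stdlib Go sketch of the first two substitutions applied to an in-memory excerpt of that drop-in (the excerpt's starting values are made up for illustration):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical excerpt of /etc/crio/crio.conf.d/02-crio.conf before the edits.
	conf := []byte("pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n")

	// Equivalent of the two sed invocations in the log: pin the pause image
	// and switch the cgroup manager to cgroupfs.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))

	fmt.Print(string(conf))
}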
	I0621 18:27:08.527822   30068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:27:08.536491   30068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 18:27:08.536537   30068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 18:27:08.548343   30068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 18:27:08.557395   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:27:08.668782   30068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 18:27:08.793146   30068 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:27:08.793228   30068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:27:08.797886   30068 start.go:562] Will wait 60s for crictl version
	I0621 18:27:08.797933   30068 ssh_runner.go:195] Run: which crictl
	I0621 18:27:08.801183   30068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:27:08.838953   30068 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:27:08.839028   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:27:08.865047   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:27:08.892059   30068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:27:08.893365   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:08.895801   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:08.896174   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:08.896198   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:08.896377   30068 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:27:08.900124   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:27:08.912152   30068 kubeadm.go:877] updating cluster {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0621 18:27:08.912252   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:27:08.912299   30068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:27:08.941267   30068 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0621 18:27:08.941328   30068 ssh_runner.go:195] Run: which lz4
	I0621 18:27:08.944757   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0621 18:27:08.944843   30068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0621 18:27:08.948482   30068 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0621 18:27:08.948507   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0621 18:27:10.186487   30068 crio.go:462] duration metric: took 1.241671996s to copy over tarball
	I0621 18:27:10.186568   30068 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0621 18:27:12.219224   30068 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.032622286s)
	I0621 18:27:12.219256   30068 crio.go:469] duration metric: took 2.032747658s to extract the tarball
	I0621 18:27:12.219265   30068 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0621 18:27:12.255526   30068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:27:12.297692   30068 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 18:27:12.297715   30068 cache_images.go:84] Images are preloaded, skipping loading
	I0621 18:27:12.297725   30068 kubeadm.go:928] updating node { 192.168.39.198 8443 v1.30.2 crio true true} ...
	I0621 18:27:12.297863   30068 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0621 18:27:12.297956   30068 ssh_runner.go:195] Run: crio config
	I0621 18:27:12.347243   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:27:12.347276   30068 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0621 18:27:12.347288   30068 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0621 18:27:12.347314   30068 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.198 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-406291 NodeName:ha-406291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0621 18:27:12.347487   30068 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-406291"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
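	Once kubeadm init has consumed the config above, the same settings can be read back from the cluster; a quick check, assuming a working kubeconfig (the log confirms the corresponding upload-config phase further below):

	    # The ClusterConfiguration is stored by kubeadm in the kubeadm-config ConfigMap.
	    kubectl -n kube-system get configmap kubeadm-config -o yaml

	    # The KubeletConfiguration section lands in its own ConfigMap.
	    kubectl -n kube-system get configmap kubelet-config -o yaml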
	
	I0621 18:27:12.347514   30068 kube-vip.go:115] generating kube-vip config ...
	I0621 18:27:12.347563   30068 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:27:12.362180   30068 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:27:12.362273   30068 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
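	kube-vip runs as a static pod from the manifest above and claims the HA VIP on the interface given by vip_interface. A quick way to confirm it is working, using the interface and address from this config (illustrative checks, run on the control-plane node):

	    # The VIP should appear as a secondary address on eth0 once kube-vip wins leader election.
	    ip addr show eth0 | grep -w 192.168.39.254

	    # The API server should then answer on the VIP (port 8443 per the manifest).
	    curl -k https://192.168.39.254:8443/version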
	I0621 18:27:12.362316   30068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:27:12.371448   30068 binaries.go:44] Found k8s binaries, skipping transfer
	I0621 18:27:12.371499   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0621 18:27:12.380031   30068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0621 18:27:12.395354   30068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0621 18:27:12.410533   30068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0621 18:27:12.425474   30068 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0621 18:27:12.440059   30068 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0621 18:27:12.443523   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
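	The one-liner above is an idempotent /etc/hosts update: strip any stale control-plane.minikube.internal entry, re-append the current VIP mapping, then copy the result back. Expanded for readability (same effect, temporary path chosen here for illustration):

	    # Drop any previous mapping for the control-plane alias, keep everything else.
	    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new

	    # Re-add the mapping pointing at the HA VIP used in this run.
	    printf '192.168.39.254\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new

	    # Install the updated file.
	    sudo cp /tmp/hosts.new /etc/hosts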
	I0621 18:27:12.454828   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:27:12.572486   30068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0621 18:27:12.589057   30068 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.198
	I0621 18:27:12.589078   30068 certs.go:194] generating shared ca certs ...
	I0621 18:27:12.589095   30068 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.589221   30068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:27:12.589272   30068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:27:12.589282   30068 certs.go:256] generating profile certs ...
	I0621 18:27:12.589333   30068 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:27:12.589346   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt with IP's: []
	I0621 18:27:12.759863   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt ...
	I0621 18:27:12.759890   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt: {Name:mk1350197087e6f37ca28e80a43c199beace4f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.760090   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key ...
	I0621 18:27:12.760104   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key: {Name:mk90994b992a268304b337419707e3332d3f039a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.760206   30068 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92
	I0621 18:27:12.760222   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.254]
	I0621 18:27:13.132336   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 ...
	I0621 18:27:13.132362   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92: {Name:mke7daa70ff2d7bf8fa87eea51b1ed6731c0dd6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.132530   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92 ...
	I0621 18:27:13.132546   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92: {Name:mk310235904dba1c4db66ef73b8dcc06ff030051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.132647   30068 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:27:13.132737   30068 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
	I0621 18:27:13.132790   30068 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:27:13.132806   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt with IP's: []
	I0621 18:27:13.317891   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt ...
	I0621 18:27:13.317927   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt: {Name:mk5e450ef3633fa54e81eaeb94f9408c94729912 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.318119   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key ...
	I0621 18:27:13.318132   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key: {Name:mk3a1443924b05c36251566d5313d0eeb467e0fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
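	The profile certs above are ordinary client certificates signed by the shared minikube CA. Minikube generates them in Go; a rough openssl equivalent for the "minikube-user" client cert is sketched below. The subject fields and output filenames are assumptions for illustration, not taken from this log.

	    # Key + CSR for the cluster admin client identity.
	    openssl genrsa -out client.key 2048
	    openssl req -new -key client.key -subj "/O=system:masters/CN=minikube-user" -out client.csr

	    # Sign it with the shared CA (paths as used elsewhere in this log).
	    openssl x509 -req -in client.csr \
	      -CA ~/.minikube/ca.crt -CAkey ~/.minikube/ca.key -CAcreateserial \
	      -days 825 -out client.crt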
	I0621 18:27:13.318220   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:27:13.318241   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:27:13.318251   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:27:13.318264   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:27:13.318274   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:27:13.318290   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:27:13.318302   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:27:13.318314   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:27:13.318363   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:27:13.318396   30068 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:27:13.318406   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:27:13.318428   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:27:13.318449   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:27:13.318469   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:27:13.318506   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:13.318531   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.318544   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.318556   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.319121   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:27:13.345382   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:27:13.379289   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:27:13.406853   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:27:13.430624   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0621 18:27:13.452498   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0621 18:27:13.474381   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:27:13.497475   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:27:13.520548   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:27:13.543849   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:27:13.569722   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:27:13.594191   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0621 18:27:13.611312   30068 ssh_runner.go:195] Run: openssl version
	I0621 18:27:13.616881   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:27:13.627054   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.631162   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.631214   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.636845   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:27:13.648132   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:27:13.658846   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.663074   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.663140   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.668358   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 18:27:13.678369   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:27:13.688293   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.692517   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.692581   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.697837   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
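	The test -L / ln -fs steps above follow OpenSSL's hashed-directory convention: each CA under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash, which is where names like 51391683.0 and b5213941.0 come from. The derivation, spelled out:

	    # Compute the subject hash for a CA and install the hashed symlink.
	    cert=/usr/share/ca-certificates/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in "$cert")      # e.g. b5213941
	    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"

	    # OpenSSL can now find the CA through the hashed directory.
	    openssl verify -CApath /etc/ssl/certs "$cert"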
	I0621 18:27:13.707967   30068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:27:13.711761   30068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 18:27:13.711821   30068 kubeadm.go:391] StartCluster: {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:27:13.711887   30068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0621 18:27:13.711960   30068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0621 18:27:13.752929   30068 cri.go:89] found id: ""
	I0621 18:27:13.753017   30068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0621 18:27:13.762514   30068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0621 18:27:13.771612   30068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0621 18:27:13.781740   30068 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0621 18:27:13.781758   30068 kubeadm.go:156] found existing configuration files:
	
	I0621 18:27:13.781811   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0621 18:27:13.790876   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0621 18:27:13.790943   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0621 18:27:13.800011   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0621 18:27:13.809117   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0621 18:27:13.809168   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0621 18:27:13.818279   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0621 18:27:13.827522   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0621 18:27:13.827584   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0621 18:27:13.836671   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0621 18:27:13.845242   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0621 18:27:13.845298   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0621 18:27:13.854365   30068 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0621 18:27:13.951888   30068 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0621 18:27:13.951970   30068 kubeadm.go:309] [preflight] Running pre-flight checks
	I0621 18:27:14.081675   30068 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0621 18:27:14.081845   30068 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0621 18:27:14.081983   30068 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0621 18:27:14.292951   30068 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0621 18:27:14.423174   30068 out.go:204]   - Generating certificates and keys ...
	I0621 18:27:14.423287   30068 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0621 18:27:14.423355   30068 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0621 18:27:14.524306   30068 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0621 18:27:14.693249   30068 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0621 18:27:14.771462   30068 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0621 18:27:14.965492   30068 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0621 18:27:15.095342   30068 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0621 18:27:15.095646   30068 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-406291 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I0621 18:27:15.247328   30068 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0621 18:27:15.247729   30068 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-406291 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I0621 18:27:15.326656   30068 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0621 18:27:15.470979   30068 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0621 18:27:15.620090   30068 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0621 18:27:15.620402   30068 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0621 18:27:15.715693   30068 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0621 18:27:16.259484   30068 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0621 18:27:16.704626   30068 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0621 18:27:16.836633   30068 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0621 18:27:16.996818   30068 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0621 18:27:16.997517   30068 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0621 18:27:16.999949   30068 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0621 18:27:17.001874   30068 out.go:204]   - Booting up control plane ...
	I0621 18:27:17.001982   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0621 18:27:17.002874   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0621 18:27:17.003729   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0621 18:27:17.018894   30068 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0621 18:27:17.019816   30068 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0621 18:27:17.019944   30068 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0621 18:27:17.138099   30068 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0621 18:27:17.138195   30068 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0621 18:27:17.639115   30068 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.282189ms
	I0621 18:27:17.639214   30068 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0621 18:27:23.502026   30068 kubeadm.go:309] [api-check] The API server is healthy after 5.864418149s
	I0621 18:27:23.512938   30068 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0621 18:27:23.528670   30068 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0621 18:27:24.059886   30068 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0621 18:27:24.060060   30068 kubeadm.go:309] [mark-control-plane] Marking the node ha-406291 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0621 18:27:24.071607   30068 kubeadm.go:309] [bootstrap-token] Using token: ha2utu.p9k0bq1xsr5791t7
	I0621 18:27:24.073185   30068 out.go:204]   - Configuring RBAC rules ...
	I0621 18:27:24.073336   30068 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0621 18:27:24.084336   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0621 18:27:24.092265   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0621 18:27:24.096415   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0621 18:27:24.101175   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0621 18:27:24.104689   30068 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0621 18:27:24.121568   30068 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0621 18:27:24.349610   30068 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0621 18:27:24.907607   30068 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0621 18:27:24.908452   30068 kubeadm.go:309] 
	I0621 18:27:24.908529   30068 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0621 18:27:24.908541   30068 kubeadm.go:309] 
	I0621 18:27:24.908607   30068 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0621 18:27:24.908645   30068 kubeadm.go:309] 
	I0621 18:27:24.908698   30068 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0621 18:27:24.908780   30068 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0621 18:27:24.908863   30068 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0621 18:27:24.908873   30068 kubeadm.go:309] 
	I0621 18:27:24.908975   30068 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0621 18:27:24.908993   30068 kubeadm.go:309] 
	I0621 18:27:24.909038   30068 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0621 18:27:24.909045   30068 kubeadm.go:309] 
	I0621 18:27:24.909086   30068 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0621 18:27:24.909160   30068 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0621 18:27:24.909256   30068 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0621 18:27:24.909274   30068 kubeadm.go:309] 
	I0621 18:27:24.909401   30068 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0621 18:27:24.909522   30068 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0621 18:27:24.909544   30068 kubeadm.go:309] 
	I0621 18:27:24.909671   30068 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ha2utu.p9k0bq1xsr5791t7 \
	I0621 18:27:24.909771   30068 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df \
	I0621 18:27:24.909810   30068 kubeadm.go:309] 	--control-plane 
	I0621 18:27:24.909824   30068 kubeadm.go:309] 
	I0621 18:27:24.909898   30068 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0621 18:27:24.909904   30068 kubeadm.go:309] 
	I0621 18:27:24.909977   30068 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ha2utu.p9k0bq1xsr5791t7 \
	I0621 18:27:24.910064   30068 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df 
	I0621 18:27:24.910664   30068 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
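	The join commands printed above embed a bootstrap token with the 24h TTL from the kubeadm config earlier in this log. If the token has expired by the time another node joins, an equivalent command can be regenerated on the running control plane:

	    # Print a fresh worker join command (creates a new bootstrap token).
	    sudo kubeadm token create --print-join-command

	    # For an extra control-plane node, also re-upload the control-plane certs
	    # and append --control-plane --certificate-key <key> to the join command.
	    sudo kubeadm init phase upload-certs --upload-certs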
	I0621 18:27:24.910700   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:27:24.910708   30068 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0621 18:27:24.912398   30068 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0621 18:27:24.913676   30068 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0621 18:27:24.919660   30068 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0621 18:27:24.919677   30068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0621 18:27:24.938734   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0621 18:27:25.303975   30068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0621 18:27:25.304070   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:25.304073   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-406291 minikube.k8s.io/updated_at=2024_06_21T18_27_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600 minikube.k8s.io/name=ha-406291 minikube.k8s.io/primary=true
	I0621 18:27:25.334777   30068 ops.go:34] apiserver oom_adj: -16
	I0621 18:27:25.436873   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:25.937461   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:26.436991   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:26.937206   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:27.437152   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:27.937860   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:28.437177   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:28.937036   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:29.437007   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:29.937140   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:30.437060   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:30.937199   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:31.437695   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:31.937675   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:32.437034   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:32.937808   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:33.437793   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:33.937401   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:34.437307   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:34.937172   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:35.437428   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:35.937146   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:36.436951   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:36.937873   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:37.039583   30068 kubeadm.go:1107] duration metric: took 11.735587948s to wait for elevateKubeSystemPrivileges
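	The repeated `kubectl get sa default` calls above are a readiness poll: the default ServiceAccount only exists once the controller-manager's service-account controller has run, and the minikube-rbac ClusterRoleBinding created just before depends on it. A standalone version of the same wait, assuming kubectl and a valid kubeconfig:

	    # Poll until the default ServiceAccount exists (up to ~60s), mirroring the loop above.
	    for _ in $(seq 1 120); do
	      if kubectl get serviceaccount default >/dev/null 2>&1; then
	        echo "default service account is ready"
	        break
	      fi
	      sleep 0.5
	    done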
	W0621 18:27:37.039626   30068 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0621 18:27:37.039635   30068 kubeadm.go:393] duration metric: took 23.327819322s to StartCluster
	I0621 18:27:37.039654   30068 settings.go:142] acquiring lock: {Name:mkdbb660cad4d8fb446e5c2ca4439ea3326e9592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:37.039737   30068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:27:37.040362   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/kubeconfig: {Name:mk87038194ab41f67dd50d90b017d32a83c3da4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:37.040584   30068 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:27:37.040604   30068 start.go:240] waiting for startup goroutines ...
	I0621 18:27:37.040603   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0621 18:27:37.040612   30068 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0621 18:27:37.040669   30068 addons.go:69] Setting storage-provisioner=true in profile "ha-406291"
	I0621 18:27:37.040677   30068 addons.go:69] Setting default-storageclass=true in profile "ha-406291"
	I0621 18:27:37.040699   30068 addons.go:234] Setting addon storage-provisioner=true in "ha-406291"
	I0621 18:27:37.040730   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:27:37.040700   30068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-406291"
	I0621 18:27:37.040772   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:37.041052   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.041075   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.041146   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.041174   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.055583   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42699
	I0621 18:27:37.056062   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.056549   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.056570   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.056894   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.057371   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.057399   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.061343   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44857
	I0621 18:27:37.061846   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.062393   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.062418   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.062721   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.062885   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.065021   30068 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:27:37.065351   30068 kapi.go:59] client config for ha-406291: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt", KeyFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key", CAFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf98a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0621 18:27:37.065825   30068 cert_rotation.go:137] Starting client certificate rotation controller
	I0621 18:27:37.066065   30068 addons.go:234] Setting addon default-storageclass=true in "ha-406291"
	I0621 18:27:37.066106   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:27:37.066471   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.066512   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.072759   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39433
	I0621 18:27:37.073274   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.073791   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.073819   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.074169   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.074346   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.076096   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:37.078312   30068 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0621 18:27:37.079815   30068 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 18:27:37.079840   30068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0621 18:27:37.079864   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:37.081896   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0621 18:27:37.082293   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.082859   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.082878   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.083163   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.083202   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.083607   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.083648   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.083733   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:37.083752   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.083817   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:37.083990   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:37.084135   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:37.084288   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:37.103512   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42081
	I0621 18:27:37.103937   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.104456   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.104473   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.104853   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.105052   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.106976   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:37.107211   30068 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0621 18:27:37.107231   30068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0621 18:27:37.107252   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:37.110295   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.110729   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:37.110755   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.110870   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:37.111030   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:37.111197   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:37.111314   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:37.137868   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0621 18:27:37.228739   30068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 18:27:37.290397   30068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0621 18:27:37.684619   30068 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
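	The sed pipeline above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.39.1 in this run), inserting a hosts block with fallthrough ahead of the forward plugin. The result can be inspected directly:

	    # Show the patched Corefile; it should contain a block like:
	    #   hosts {
	    #      192.168.39.1 host.minikube.internal
	    #      fallthrough
	    #   }
	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'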
	I0621 18:27:37.902862   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.902882   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.902957   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.902988   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903179   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903194   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903203   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.903210   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903287   30068 main.go:141] libmachine: (ha-406291) DBG | Closing plugin on server side
	I0621 18:27:37.903300   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903312   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903321   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.903328   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903474   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903485   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903513   30068 main.go:141] libmachine: (ha-406291) DBG | Closing plugin on server side
	I0621 18:27:37.903578   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903595   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903740   30068 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0621 18:27:37.903767   30068 round_trippers.go:469] Request Headers:
	I0621 18:27:37.903778   30068 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:27:37.903784   30068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:27:37.922164   30068 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0621 18:27:37.922691   30068 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0621 18:27:37.922706   30068 round_trippers.go:469] Request Headers:
	I0621 18:27:37.922713   30068 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:27:37.922718   30068 round_trippers.go:473]     Content-Type: application/json
	I0621 18:27:37.922720   30068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:27:37.926249   30068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0621 18:27:37.926491   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.926512   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.926731   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.926748   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.928515   30068 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0621 18:27:37.930095   30068 addons.go:510] duration metric: took 889.47949ms for enable addons: enabled=[storage-provisioner default-storageclass]
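	With the two addons enabled, the outcome can be sanity-checked with plain kubectl; the PUT to /storageclasses/standard above is the default-storageclass addon marking "standard" as the default class.

	    # The storage-provisioner addon runs as a single pod in kube-system.
	    kubectl -n kube-system get pod storage-provisioner

	    # "standard" should be listed with "(default)" after the annotation update above.
	    kubectl get storageclass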
	I0621 18:27:37.930127   30068 start.go:245] waiting for cluster config update ...
	I0621 18:27:37.930137   30068 start.go:254] writing updated cluster config ...
	I0621 18:27:37.931687   30068 out.go:177] 
	I0621 18:27:37.933039   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:37.933102   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:37.934716   30068 out.go:177] * Starting "ha-406291-m02" control-plane node in "ha-406291" cluster
	I0621 18:27:37.935953   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:27:37.935970   30068 cache.go:56] Caching tarball of preloaded images
	I0621 18:27:37.936052   30068 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:27:37.936063   30068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:27:37.936142   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:37.936325   30068 start.go:360] acquireMachinesLock for ha-406291-m02: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:27:37.936370   30068 start.go:364] duration metric: took 24.972µs to acquireMachinesLock for "ha-406291-m02"
	I0621 18:27:37.936392   30068 start.go:93] Provisioning new machine with config: &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:27:37.936481   30068 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0621 18:27:37.938349   30068 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0621 18:27:37.938428   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.938450   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.952767   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34515
	I0621 18:27:37.953176   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.953649   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.953669   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.953963   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.954162   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:37.954301   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:37.954431   30068 start.go:159] libmachine.API.Create for "ha-406291" (driver="kvm2")
	I0621 18:27:37.954456   30068 client.go:168] LocalClient.Create starting
	I0621 18:27:37.954488   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem
	I0621 18:27:37.954518   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:27:37.954538   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:27:37.954589   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem
	I0621 18:27:37.954607   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:27:37.954621   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:27:37.954636   30068 main.go:141] libmachine: Running pre-create checks...
	I0621 18:27:37.954644   30068 main.go:141] libmachine: (ha-406291-m02) Calling .PreCreateCheck
	I0621 18:27:37.954836   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:37.955238   30068 main.go:141] libmachine: Creating machine...
	I0621 18:27:37.955253   30068 main.go:141] libmachine: (ha-406291-m02) Calling .Create
	I0621 18:27:37.955404   30068 main.go:141] libmachine: (ha-406291-m02) Creating KVM machine...
	I0621 18:27:37.956748   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found existing default KVM network
	I0621 18:27:37.956951   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found existing private KVM network mk-ha-406291
	I0621 18:27:37.957069   30068 main.go:141] libmachine: (ha-406291-m02) Setting up store path in /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 ...
	I0621 18:27:37.957091   30068 main.go:141] libmachine: (ha-406291-m02) Building disk image from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 18:27:37.957139   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:37.957062   30460 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:27:37.957278   30068 main.go:141] libmachine: (ha-406291-m02) Downloading /home/jenkins/minikube-integration/19112-8111/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0621 18:27:38.178433   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.178291   30460 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa...
	I0621 18:27:38.322659   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.322470   30460 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/ha-406291-m02.rawdisk...
	I0621 18:27:38.322709   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 (perms=drwx------)
	I0621 18:27:38.322719   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Writing magic tar header
	I0621 18:27:38.322734   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Writing SSH key tar header
	I0621 18:27:38.322745   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.322583   30460 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 ...
	I0621 18:27:38.322758   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02
	I0621 18:27:38.322822   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines
	I0621 18:27:38.322839   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines (perms=drwxr-xr-x)
	I0621 18:27:38.322855   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube (perms=drwxr-xr-x)
	I0621 18:27:38.322864   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111 (perms=drwxrwxr-x)
	I0621 18:27:38.322874   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0621 18:27:38.322882   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0621 18:27:38.322896   30068 main.go:141] libmachine: (ha-406291-m02) Creating domain...
	I0621 18:27:38.322919   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:27:38.322939   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111
	I0621 18:27:38.322950   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0621 18:27:38.322968   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins
	I0621 18:27:38.322980   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home
	I0621 18:27:38.322988   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Skipping /home - not owner
	I0621 18:27:38.324031   30068 main.go:141] libmachine: (ha-406291-m02) define libvirt domain using xml: 
	I0621 18:27:38.324058   30068 main.go:141] libmachine: (ha-406291-m02) <domain type='kvm'>
	I0621 18:27:38.324071   30068 main.go:141] libmachine: (ha-406291-m02)   <name>ha-406291-m02</name>
	I0621 18:27:38.324078   30068 main.go:141] libmachine: (ha-406291-m02)   <memory unit='MiB'>2200</memory>
	I0621 18:27:38.324087   30068 main.go:141] libmachine: (ha-406291-m02)   <vcpu>2</vcpu>
	I0621 18:27:38.324098   30068 main.go:141] libmachine: (ha-406291-m02)   <features>
	I0621 18:27:38.324107   30068 main.go:141] libmachine: (ha-406291-m02)     <acpi/>
	I0621 18:27:38.324116   30068 main.go:141] libmachine: (ha-406291-m02)     <apic/>
	I0621 18:27:38.324125   30068 main.go:141] libmachine: (ha-406291-m02)     <pae/>
	I0621 18:27:38.324134   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324149   30068 main.go:141] libmachine: (ha-406291-m02)   </features>
	I0621 18:27:38.324164   30068 main.go:141] libmachine: (ha-406291-m02)   <cpu mode='host-passthrough'>
	I0621 18:27:38.324173   30068 main.go:141] libmachine: (ha-406291-m02)   
	I0621 18:27:38.324184   30068 main.go:141] libmachine: (ha-406291-m02)   </cpu>
	I0621 18:27:38.324199   30068 main.go:141] libmachine: (ha-406291-m02)   <os>
	I0621 18:27:38.324209   30068 main.go:141] libmachine: (ha-406291-m02)     <type>hvm</type>
	I0621 18:27:38.324220   30068 main.go:141] libmachine: (ha-406291-m02)     <boot dev='cdrom'/>
	I0621 18:27:38.324231   30068 main.go:141] libmachine: (ha-406291-m02)     <boot dev='hd'/>
	I0621 18:27:38.324258   30068 main.go:141] libmachine: (ha-406291-m02)     <bootmenu enable='no'/>
	I0621 18:27:38.324280   30068 main.go:141] libmachine: (ha-406291-m02)   </os>
	I0621 18:27:38.324293   30068 main.go:141] libmachine: (ha-406291-m02)   <devices>
	I0621 18:27:38.324310   30068 main.go:141] libmachine: (ha-406291-m02)     <disk type='file' device='cdrom'>
	I0621 18:27:38.324333   30068 main.go:141] libmachine: (ha-406291-m02)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/boot2docker.iso'/>
	I0621 18:27:38.324344   30068 main.go:141] libmachine: (ha-406291-m02)       <target dev='hdc' bus='scsi'/>
	I0621 18:27:38.324350   30068 main.go:141] libmachine: (ha-406291-m02)       <readonly/>
	I0621 18:27:38.324357   30068 main.go:141] libmachine: (ha-406291-m02)     </disk>
	I0621 18:27:38.324363   30068 main.go:141] libmachine: (ha-406291-m02)     <disk type='file' device='disk'>
	I0621 18:27:38.324375   30068 main.go:141] libmachine: (ha-406291-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0621 18:27:38.324390   30068 main.go:141] libmachine: (ha-406291-m02)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/ha-406291-m02.rawdisk'/>
	I0621 18:27:38.324401   30068 main.go:141] libmachine: (ha-406291-m02)       <target dev='hda' bus='virtio'/>
	I0621 18:27:38.324412   30068 main.go:141] libmachine: (ha-406291-m02)     </disk>
	I0621 18:27:38.324421   30068 main.go:141] libmachine: (ha-406291-m02)     <interface type='network'>
	I0621 18:27:38.324431   30068 main.go:141] libmachine: (ha-406291-m02)       <source network='mk-ha-406291'/>
	I0621 18:27:38.324442   30068 main.go:141] libmachine: (ha-406291-m02)       <model type='virtio'/>
	I0621 18:27:38.324453   30068 main.go:141] libmachine: (ha-406291-m02)     </interface>
	I0621 18:27:38.324465   30068 main.go:141] libmachine: (ha-406291-m02)     <interface type='network'>
	I0621 18:27:38.324474   30068 main.go:141] libmachine: (ha-406291-m02)       <source network='default'/>
	I0621 18:27:38.324481   30068 main.go:141] libmachine: (ha-406291-m02)       <model type='virtio'/>
	I0621 18:27:38.324493   30068 main.go:141] libmachine: (ha-406291-m02)     </interface>
	I0621 18:27:38.324503   30068 main.go:141] libmachine: (ha-406291-m02)     <serial type='pty'>
	I0621 18:27:38.324516   30068 main.go:141] libmachine: (ha-406291-m02)       <target port='0'/>
	I0621 18:27:38.324527   30068 main.go:141] libmachine: (ha-406291-m02)     </serial>
	I0621 18:27:38.324540   30068 main.go:141] libmachine: (ha-406291-m02)     <console type='pty'>
	I0621 18:27:38.324553   30068 main.go:141] libmachine: (ha-406291-m02)       <target type='serial' port='0'/>
	I0621 18:27:38.324562   30068 main.go:141] libmachine: (ha-406291-m02)     </console>
	I0621 18:27:38.324572   30068 main.go:141] libmachine: (ha-406291-m02)     <rng model='virtio'>
	I0621 18:27:38.324596   30068 main.go:141] libmachine: (ha-406291-m02)       <backend model='random'>/dev/random</backend>
	I0621 18:27:38.324609   30068 main.go:141] libmachine: (ha-406291-m02)     </rng>
	I0621 18:27:38.324630   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324640   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324648   30068 main.go:141] libmachine: (ha-406291-m02)   </devices>
	I0621 18:27:38.324660   30068 main.go:141] libmachine: (ha-406291-m02) </domain>
	I0621 18:27:38.324670   30068 main.go:141] libmachine: (ha-406291-m02) 
	I0621 18:27:38.332042   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:20:08:0e in network default
	I0621 18:27:38.332641   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring networks are active...
	I0621 18:27:38.332676   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:38.333428   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring network default is active
	I0621 18:27:38.333804   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring network mk-ha-406291 is active
	I0621 18:27:38.334296   30068 main.go:141] libmachine: (ha-406291-m02) Getting domain xml...
	I0621 18:27:38.335120   30068 main.go:141] libmachine: (ha-406291-m02) Creating domain...
	I0621 18:27:39.549305   30068 main.go:141] libmachine: (ha-406291-m02) Waiting to get IP...
	I0621 18:27:39.550967   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:39.551951   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:39.551976   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:39.551936   30460 retry.go:31] will retry after 267.635955ms: waiting for machine to come up
	I0621 18:27:39.821522   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:39.821997   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:39.822029   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:39.821946   30460 retry.go:31] will retry after 374.873977ms: waiting for machine to come up
	I0621 18:27:40.198386   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:40.198873   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:40.198904   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:40.198809   30460 retry.go:31] will retry after 315.815993ms: waiting for machine to come up
	I0621 18:27:40.516366   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:40.516862   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:40.516886   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:40.516817   30460 retry.go:31] will retry after 541.866776ms: waiting for machine to come up
	I0621 18:27:41.060525   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:41.061206   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:41.061240   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:41.061128   30460 retry.go:31] will retry after 493.062164ms: waiting for machine to come up
	I0621 18:27:41.555747   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:41.556109   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:41.556139   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:41.556061   30460 retry.go:31] will retry after 805.68132ms: waiting for machine to come up
	I0621 18:27:42.362929   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:42.363432   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:42.363464   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:42.363390   30460 retry.go:31] will retry after 986.445399ms: waiting for machine to come up
	I0621 18:27:43.351818   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:43.352265   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:43.352293   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:43.352201   30460 retry.go:31] will retry after 1.001415085s: waiting for machine to come up
	I0621 18:27:44.355253   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:44.355689   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:44.355710   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:44.355671   30460 retry.go:31] will retry after 1.270979624s: waiting for machine to come up
	I0621 18:27:45.627787   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:45.628323   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:45.628354   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:45.628272   30460 retry.go:31] will retry after 2.328221347s: waiting for machine to come up
	I0621 18:27:47.958352   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:47.958918   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:47.958945   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:47.958858   30460 retry.go:31] will retry after 2.603205559s: waiting for machine to come up
	I0621 18:27:50.565502   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:50.565956   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:50.565982   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:50.565839   30460 retry.go:31] will retry after 3.267607258s: waiting for machine to come up
	I0621 18:27:53.834801   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:53.835311   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:53.835344   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:53.835270   30460 retry.go:31] will retry after 4.450176964s: waiting for machine to come up
	I0621 18:27:58.286744   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.287205   30068 main.go:141] libmachine: (ha-406291-m02) Found IP for machine: 192.168.39.89
	I0621 18:27:58.287228   30068 main.go:141] libmachine: (ha-406291-m02) Reserving static IP address...
	I0621 18:27:58.287241   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has current primary IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.287601   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find host DHCP lease matching {name: "ha-406291-m02", mac: "52:54:00:a6:9a:09", ip: "192.168.39.89"} in network mk-ha-406291
	I0621 18:27:58.359643   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Getting to WaitForSSH function...
	I0621 18:27:58.359672   30068 main.go:141] libmachine: (ha-406291-m02) Reserved static IP address: 192.168.39.89
	I0621 18:27:58.359686   30068 main.go:141] libmachine: (ha-406291-m02) Waiting for SSH to be available...
	I0621 18:27:58.362234   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.362656   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.362687   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.362831   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH client type: external
	I0621 18:27:58.362856   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa (-rw-------)
	I0621 18:27:58.362889   30068 main.go:141] libmachine: (ha-406291-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:58.362901   30068 main.go:141] libmachine: (ha-406291-m02) DBG | About to run SSH command:
	I0621 18:27:58.362914   30068 main.go:141] libmachine: (ha-406291-m02) DBG | exit 0
	I0621 18:27:58.489760   30068 main.go:141] libmachine: (ha-406291-m02) DBG | SSH cmd err, output: <nil>: 
	I0621 18:27:58.490247   30068 main.go:141] libmachine: (ha-406291-m02) KVM machine creation complete!
	I0621 18:27:58.490512   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:58.491093   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:58.491338   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:58.491506   30068 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0621 18:27:58.491523   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:27:58.492807   30068 main.go:141] libmachine: Detecting operating system of created instance...
	I0621 18:27:58.492820   30068 main.go:141] libmachine: Waiting for SSH to be available...
	I0621 18:27:58.492825   30068 main.go:141] libmachine: Getting to WaitForSSH function...
	I0621 18:27:58.492853   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.495422   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.495802   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.495822   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.496013   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.496199   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.496377   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.496515   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.496690   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.496943   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.496957   30068 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0621 18:27:58.609072   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:58.609094   30068 main.go:141] libmachine: Detecting the provisioner...
	I0621 18:27:58.609101   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.611976   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.612412   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.612450   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.612655   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.612869   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.613083   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.613234   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.613421   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.613617   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.613629   30068 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0621 18:27:58.726636   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0621 18:27:58.726736   30068 main.go:141] libmachine: found compatible host: buildroot
	I0621 18:27:58.726751   30068 main.go:141] libmachine: Provisioning with buildroot...
	I0621 18:27:58.726768   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.727017   30068 buildroot.go:166] provisioning hostname "ha-406291-m02"
	I0621 18:27:58.727040   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.727234   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.729851   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.730255   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.730296   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.730453   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.730628   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.730787   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.730932   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.731090   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.731271   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.731295   30068 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291-m02 && echo "ha-406291-m02" | sudo tee /etc/hostname
	I0621 18:27:58.855682   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291-m02
	
	I0621 18:27:58.855710   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.858373   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.858679   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.858702   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.858921   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.859107   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.859289   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.859473   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.859613   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.859768   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.859784   30068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:27:58.979692   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:58.979722   30068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:27:58.979735   30068 buildroot.go:174] setting up certificates
	I0621 18:27:58.979743   30068 provision.go:84] configureAuth start
	I0621 18:27:58.979750   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.980076   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:58.982730   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.983078   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.983110   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.983252   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.985344   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.985701   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.985721   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.985890   30068 provision.go:143] copyHostCerts
	I0621 18:27:58.985924   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:58.985962   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:27:58.985976   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:58.986057   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:27:58.986156   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:58.986180   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:27:58.986187   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:58.986229   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:27:58.986293   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:58.986317   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:27:58.986326   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:58.986360   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:27:58.986426   30068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291-m02 san=[127.0.0.1 192.168.39.89 ha-406291-m02 localhost minikube]
	I0621 18:27:59.066564   30068 provision.go:177] copyRemoteCerts
	I0621 18:27:59.066626   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:27:59.066653   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.069578   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.069924   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.069947   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.070132   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.070298   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.070432   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.070553   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.157218   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:27:59.157315   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 18:27:59.181198   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:27:59.181277   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:27:59.204590   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:27:59.204671   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0621 18:27:59.228836   30068 provision.go:87] duration metric: took 249.081961ms to configureAuth
	I0621 18:27:59.228857   30068 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:27:59.229023   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:59.229086   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.231759   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.232083   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.232114   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.232338   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.232525   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.232684   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.232859   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.233030   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:59.233222   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:59.233247   30068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:27:59.513149   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 18:27:59.513176   30068 main.go:141] libmachine: Checking connection to Docker...
	I0621 18:27:59.513184   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetURL
	I0621 18:27:59.514352   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using libvirt version 6000000
	I0621 18:27:59.516825   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.517208   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.517232   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.517421   30068 main.go:141] libmachine: Docker is up and running!
	I0621 18:27:59.517438   30068 main.go:141] libmachine: Reticulating splines...
	I0621 18:27:59.517446   30068 client.go:171] duration metric: took 21.562982419s to LocalClient.Create
	I0621 18:27:59.517469   30068 start.go:167] duration metric: took 21.563040702s to libmachine.API.Create "ha-406291"
	I0621 18:27:59.517482   30068 start.go:293] postStartSetup for "ha-406291-m02" (driver="kvm2")
	I0621 18:27:59.517494   30068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:27:59.517516   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.517768   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:27:59.517792   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.520113   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.520510   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.520540   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.520681   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.520881   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.521084   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.521256   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.607755   30068 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:27:59.611555   30068 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:27:59.611581   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:27:59.611696   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:27:59.611804   30068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:27:59.611817   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:27:59.611939   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:27:59.620359   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:59.643420   30068 start.go:296] duration metric: took 125.923821ms for postStartSetup
	I0621 18:27:59.643465   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:59.644062   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:59.646345   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.646685   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.646713   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.646924   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:59.647158   30068 start.go:128] duration metric: took 21.710666055s to createHost
	I0621 18:27:59.647181   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.649469   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.649766   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.649808   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.649962   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.650164   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.650334   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.650463   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.650585   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:59.650778   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:59.650790   30068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0621 18:27:59.762223   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718994479.737744516
	
	I0621 18:27:59.762248   30068 fix.go:216] guest clock: 1718994479.737744516
	I0621 18:27:59.762259   30068 fix.go:229] Guest: 2024-06-21 18:27:59.737744516 +0000 UTC Remote: 2024-06-21 18:27:59.647170431 +0000 UTC m=+77.232139235 (delta=90.574085ms)
	I0621 18:27:59.762274   30068 fix.go:200] guest clock delta is within tolerance: 90.574085ms
	I0621 18:27:59.762279   30068 start.go:83] releasing machines lock for "ha-406291-m02", held for 21.825898335s
	I0621 18:27:59.762294   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.762550   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:59.765379   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.765744   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.765772   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.768017   30068 out.go:177] * Found network options:
	I0621 18:27:59.769201   30068 out.go:177]   - NO_PROXY=192.168.39.198
	W0621 18:27:59.770311   30068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0621 18:27:59.770350   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.770853   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.771049   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.771143   30068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:27:59.771180   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	W0621 18:27:59.771247   30068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0621 18:27:59.771305   30068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:27:59.771322   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.774073   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774210   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774455   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.774482   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774586   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.774595   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.774615   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774740   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.774796   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.774875   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.774963   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.775030   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.775150   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.775184   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:28:00.009851   30068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:28:00.016373   30068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:28:00.016450   30068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:28:00.032199   30068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0621 18:28:00.032221   30068 start.go:494] detecting cgroup driver to use...
	I0621 18:28:00.032283   30068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:28:00.047343   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:28:00.061720   30068 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:28:00.061774   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:28:00.074668   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:28:00.087919   30068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:28:00.213060   30068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:28:00.376339   30068 docker.go:233] disabling docker service ...
	I0621 18:28:00.376406   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:28:00.391732   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:28:00.405305   30068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:28:00.525867   30068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:28:00.642362   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
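Only CRI-O should own the CRI socket, so any containerd, cri-dockerd and docker units are stopped, disabled and masked before configuration continues. The commands above, condensed into one sketch (unit names exactly as in the log):

    # Keep containerd, cri-dockerd and docker from racing CRI-O for the CRI socket.
    sudo systemctl stop -f containerd || true
    sudo systemctl stop -f cri-docker.socket cri-docker.service || true
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service || true
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service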
	I0621 18:28:00.656276   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:28:00.673811   30068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:28:00.673883   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.683794   30068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:28:00.683849   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.693601   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.703298   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.712924   30068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:28:00.722921   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.733272   30068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.749781   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
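CRI-O itself is tuned with a handful of in-place sed edits on /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the cgroup manager to cgroupfs, let conmon follow the pod cgroup, and open unprivileged low ports via default_sysctls. The same edits as one sketch (all values copied from the lines above):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # Pause image and cgroup driver.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    # conmon inherits the pod cgroup.
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # Allow pods to bind ports below 1024.
    sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
    sudo grep -q '^ *default_sysctls' "$CONF" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"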
	I0621 18:28:00.759708   30068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:28:00.768749   30068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 18:28:00.768811   30068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 18:28:00.780758   30068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
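The failed sysctl probe above is expected: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once br_netfilter is loaded, so the module is inserted and IP forwarding enabled afterwards. Roughly:

    # bridge-nf-call-iptables appears only after br_netfilter is loaded.
    sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'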
	I0621 18:28:00.789993   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:28:00.904855   30068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 18:28:01.039631   30068 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:28:01.039706   30068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:28:01.044480   30068 start.go:562] Will wait 60s for crictl version
	I0621 18:28:01.044536   30068 ssh_runner.go:195] Run: which crictl
	I0621 18:28:01.048220   30068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:28:01.089333   30068 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:28:01.089402   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:28:01.115665   30068 ssh_runner.go:195] Run: crio --version
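With the config edits in place, CRI-O is restarted and the runtime is verified through its socket and crictl; in sketch form (the 60s budget comes from the "Will wait 60s" lines above):

    sudo systemctl daemon-reload
    sudo systemctl restart crio
    # Wait up to 60s for the runtime socket, then confirm the CRI and CRI-O versions.
    timeout 60 sh -c 'until stat /var/run/crio/crio.sock >/dev/null 2>&1; do sleep 1; done'
    sudo crictl version
    crio --version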
	I0621 18:28:01.144585   30068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:28:01.145952   30068 out.go:177]   - env NO_PROXY=192.168.39.198
	I0621 18:28:01.147149   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:28:01.149745   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:28:01.150121   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:28:01.150153   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:28:01.150424   30068 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:28:01.154395   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
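host.minikube.internal is kept pointing at the libvirt gateway (192.168.39.1 here) by checking /etc/hosts and, when the entry is missing, rewriting the file without any stale entry before appending a fresh one. A bash sketch of that refresh (gateway IP taken from the log):

    # Refresh the host.minikube.internal entry.
    if ! grep -q $'192.168.39.1\thost.minikube.internal' /etc/hosts; then
      { grep -v $'\thost.minikube.internal$' /etc/hosts
        printf '192.168.39.1\thost.minikube.internal\n'
      } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts
    fi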
	I0621 18:28:01.167802   30068 mustload.go:65] Loading cluster: ha-406291
	I0621 18:28:01.168024   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:28:01.168528   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:28:01.168581   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:28:01.183458   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35465
	I0621 18:28:01.183955   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:28:01.184452   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:28:01.184472   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:28:01.184809   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:28:01.185006   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:28:01.186504   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:28:01.186796   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:28:01.186838   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:28:01.201898   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38995
	I0621 18:28:01.202307   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:28:01.202715   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:28:01.202735   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:28:01.203060   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:28:01.203242   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:28:01.203402   30068 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.89
	I0621 18:28:01.203414   30068 certs.go:194] generating shared ca certs ...
	I0621 18:28:01.203427   30068 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.203536   30068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:28:01.203569   30068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:28:01.203578   30068 certs.go:256] generating profile certs ...
	I0621 18:28:01.203637   30068 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:28:01.203663   30068 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63
	I0621 18:28:01.203682   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.89 192.168.39.254]
	I0621 18:28:01.277240   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 ...
	I0621 18:28:01.277269   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63: {Name:mk0eb1e86875fe5e87f845f9e621f66001c859bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.277433   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63 ...
	I0621 18:28:01.277446   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63: {Name:mk95e28e76a927e44fae3dabafa76ecc474c70ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.277517   30068 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:28:01.277686   30068 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
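The apiserver serving certificate for the new control-plane node is signed by the shared minikubeCA and lists every address clients may use: the in-cluster service IPs, localhost, both node IPs and the kube-vip VIP 192.168.39.254. minikube does this in Go (crypto.go above); purely as an illustration, an equivalent openssl invocation carrying the same SANs (subject CN and validity are placeholders) would look like:

    # Illustration only: CA-signed apiserver cert with the SANs from the log.
    printf 'subjectAltName = IP:10.96.0.1, IP:127.0.0.1, IP:10.0.0.1, IP:192.168.39.198, IP:192.168.39.89, IP:192.168.39.254\n' > san.cnf
    openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key -out apiserver.csr -subj "/CN=minikube"
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -out apiserver.crt -days 365 -extfile san.cnf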
	I0621 18:28:01.277852   30068 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:28:01.277870   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:28:01.277883   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:28:01.277894   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:28:01.277906   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:28:01.277922   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:28:01.277934   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:28:01.277946   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:28:01.277957   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:28:01.278003   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:28:01.278030   30068 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:28:01.278039   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:28:01.278059   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:28:01.278080   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:28:01.278100   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:28:01.278136   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:28:01.278162   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.278179   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.278191   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.278220   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:28:01.281289   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:28:01.281749   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:28:01.281771   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:28:01.281960   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:28:01.282180   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:28:01.282351   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:28:01.282534   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:28:01.350153   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0621 18:28:01.355146   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0621 18:28:01.366317   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0621 18:28:01.370418   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0621 18:28:01.381527   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0621 18:28:01.385371   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0621 18:28:01.395583   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0621 18:28:01.399523   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0621 18:28:01.409427   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0621 18:28:01.413340   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0621 18:28:01.424281   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0621 18:28:01.428574   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0621 18:28:01.443501   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:28:01.467141   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:28:01.489464   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:28:01.512839   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:28:01.536345   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0621 18:28:01.560903   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0621 18:28:01.585228   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:28:01.609236   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:28:01.632797   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:28:01.657717   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:28:01.680728   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:28:01.704813   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0621 18:28:01.722206   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0621 18:28:01.739548   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0621 18:28:01.757066   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0621 18:28:01.773769   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0621 18:28:01.790648   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0621 18:28:01.807019   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0621 18:28:01.824606   30068 ssh_runner.go:195] Run: openssl version
	I0621 18:28:01.830760   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:28:01.841994   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.846701   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.846753   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.852556   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0621 18:28:01.863407   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:28:01.874586   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.879134   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.879185   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.884636   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:28:01.895639   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:28:01.907107   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.911747   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.911813   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.917537   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
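Each CA bundle is copied under /usr/share/ca-certificates and then linked into /etc/ssl/certs both by name and by its OpenSSL subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run), which is how OpenSSL locates trust anchors at verification time. A generic sketch of that install step:

    # Install a CA cert and expose it under its OpenSSL subject-hash name.
    install_ca() {
      local pem="$1"                                # e.g. /usr/share/ca-certificates/minikubeCA.pem
      local hash
      hash=$(openssl x509 -hash -noout -in "$pem")  # e.g. b5213941
      sudo ln -fs "$pem" "/etc/ssl/certs/$(basename "$pem")"
      sudo ln -fs "/etc/ssl/certs/$(basename "$pem")" "/etc/ssl/certs/${hash}.0"
    }
    install_ca /usr/share/ca-certificates/minikubeCA.pem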
	I0621 18:28:01.928452   30068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:28:01.932569   30068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 18:28:01.932640   30068 kubeadm.go:928] updating node {m02 192.168.39.89 8443 v1.30.2 crio true true} ...
	I0621 18:28:01.932831   30068 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0621 18:28:01.932869   30068 kube-vip.go:115] generating kube-vip config ...
	I0621 18:28:01.932919   30068 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:28:01.949970   30068 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:28:01.950046   30068 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0621 18:28:01.950102   30068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:28:01.960116   30068 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0621 18:28:01.960197   30068 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0621 18:28:01.969893   30068 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0621 18:28:01.969926   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0621 18:28:01.969997   30068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0621 18:28:01.970033   30068 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0621 18:28:01.970001   30068 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0621 18:28:01.974344   30068 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0621 18:28:01.974375   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0621 18:28:02.755689   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0621 18:28:02.755764   30068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0621 18:28:02.760415   30068 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0621 18:28:02.760448   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
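Joining the node requires kubectl, kubeadm and kubelet under /var/lib/minikube/binaries/v1.30.2: each binary is downloaded from dl.k8s.io with its published .sha256 used as the checksum, cached on the Jenkins host, and scp'd into the VM whenever the stat existence check fails. A simplified sketch of the download-and-verify step (cache path shortened; error handling omitted):

    VER=v1.30.2
    CACHE="$HOME/.minikube/cache/linux/amd64/$VER"
    mkdir -p "$CACHE"
    for bin in kubectl kubeadm kubelet; do
      curl -fL "https://dl.k8s.io/release/$VER/bin/linux/amd64/$bin"        -o "$CACHE/$bin"
      curl -fL "https://dl.k8s.io/release/$VER/bin/linux/amd64/$bin.sha256" -o "$CACHE/$bin.sha256"
      # Verify against the published digest before the binary is staged on the node.
      echo "$(cat "$CACHE/$bin.sha256")  $CACHE/$bin" | sha256sum -c -
    done
    # The cached binaries are then scp'd to /var/lib/minikube/binaries/$VER/ on the VM.

It is this download of kubelet that fails below with a TCP connection reset, which is what aborts the node join.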
	I0621 18:28:55.051081   30068 out.go:177] 
	W0621 18:28:55.052955   30068 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0] Decompressors:map[bz2:0xc000769610 gz:0xc000769618 tar:0xc0007695c0 tar.bz2:0xc0007695d0 tar.gz:0xc0007695e0 tar.xz:0xc0007695f0 tar.zst:0xc000769600 tbz2:0xc0007695d0 tgz:0xc0007695e0 txz:0xc0007695f0 tzst:0xc000769600 xz:0xc000769620 zip:0xc000769630 zst:0xc000769628] Getters:map[file:0xc0009371c0 http:0xc
0008bcf50 https:0xc0008bcfa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:46716->151.101.193.55:443: read: connection reset by peer
	W0621 18:28:55.052979   30068 out.go:239] * 
	W0621 18:28:55.053829   30068 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0621 18:28:55.055312   30068 out.go:177] 
	
	
	==> CRI-O <==
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.285610062Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-qvl48,Uid:59f123aa-60d0-4d29-b58e-cb9a43c26895,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994537417860566,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-21T18:28:57.107715447Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f6a39ae0-87ac-492a-a711-290e61bb895e,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1718994459650788102,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-21T18:27:39.331926430Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-7ng4v,Uid:4724701c-6f0e-45ed-8fc7-70245d4fa569,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994459636285025,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-21T18:27:39.324840171Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nx5xs,Uid:375157ef-5af0-41b9-8ed9-162e5a88c679,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1718994459635123081,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c679,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-21T18:27:39.328881687Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&PodSandboxMetadata{Name:kube-proxy-xnbqj,Uid:11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994457732197222,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-06-21T18:27:37.424597593Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&PodSandboxMetadata{Name:kindnet-vnds7,Uid:e921d86f-0ac3-413e-9e85-e809139ca210,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994457715084104,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-21T18:27:37.400904877Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-406291,Uid:81efe8b097b0aaeaaac87f9a6e2dfe3b,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1718994437888590878,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 81efe8b097b0aaeaaac87f9a6e2dfe3b,kubernetes.io/config.seen: 2024-06-21T18:27:17.383181217Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-406291,Uid:29bf44d365a415a68be28c9aad205c23,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994437887303918,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{kubernetes.io/config.hash: 29bf
44d365a415a68be28c9aad205c23,kubernetes.io/config.seen: 2024-06-21T18:27:17.383182123Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&PodSandboxMetadata{Name:etcd-ha-406291,Uid:28eb1f9a7974972f95837a71475ffe97,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994437864857022,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.198:2379,kubernetes.io/config.hash: 28eb1f9a7974972f95837a71475ffe97,kubernetes.io/config.seen: 2024-06-21T18:27:17.383174241Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&PodSandboxMetadata{Name:kube-a
piserver-ha-406291,Uid:ac2d2e5dadb6d48084ee46b3119245c5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994437841913023,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.198:8443,kubernetes.io/config.hash: ac2d2e5dadb6d48084ee46b3119245c5,kubernetes.io/config.seen: 2024-06-21T18:27:17.383178563Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-406291,Uid:8bd582f38b9812a77200f468c3cf9c0d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994437841113621,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.c
ontainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8bd582f38b9812a77200f468c3cf9c0d,kubernetes.io/config.seen: 2024-06-21T18:27:17.383179836Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=12366c2e-21c3-4ac7-b3c5-75382860bdd0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.286191582Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b7ad7d0-c82c-489f-82b4-e2a6a2998641 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.286237320Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b7ad7d0-c82c-489f-82b4-e2a6a2998641 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.286451997Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1b7ad7d0-c82c-489f-82b4-e2a6a2998641 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.322399822Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c0205888-dc2b-4b56-ae1c-6161bba26158 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.322498691Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c0205888-dc2b-4b56-ae1c-6161bba26158 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.323723751Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1a12022-d4c0-4383-b3e2-e5ad652507de name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.324115529Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995230324094603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1a12022-d4c0-4383-b3e2-e5ad652507de name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.324830594Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9fc7a96e-c923-4f95-b47b-3b6a4cbcfc91 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.324884711Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9fc7a96e-c923-4f95-b47b-3b6a4cbcfc91 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.325110764Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9fc7a96e-c923-4f95-b47b-3b6a4cbcfc91 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.359589076Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f232581-7109-4250-80a2-3e2fd9f9f472 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.359663390Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f232581-7109-4250-80a2-3e2fd9f9f472 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.360722579Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7be35a1b-b90e-4170-8caf-99089ad20f3d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.361360640Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995230361325778,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7be35a1b-b90e-4170-8caf-99089ad20f3d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.361827654Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ae8b13d-4cea-4938-afec-d45ebe6eceb9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.361877352Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ae8b13d-4cea-4938-afec-d45ebe6eceb9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.362275462Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2ae8b13d-4cea-4938-afec-d45ebe6eceb9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.398448205Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e103660-1d8b-4fd9-989c-617fa5108f60 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.398543322Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e103660-1d8b-4fd9-989c-617fa5108f60 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.399739913Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=93715cbb-b93e-4485-b46a-065922f02215 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.400232031Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995230400205121,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=93715cbb-b93e-4485-b46a-065922f02215 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.400909243Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb253e6c-1238-446b-b41f-f84f6379a954 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.400967146Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb253e6c-1238-446b-b41f-f84f6379a954 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:40:30 ha-406291 crio[679]: time="2024-06-21 18:40:30.401388302Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb253e6c-1238-446b-b41f-f84f6379a954 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	252cb2f279857       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago      Running             busybox                   0                   cd0fd4f6a3d6c       busybox-fc5497c4f-qvl48
	6d732e2622f11       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago      Running             coredns                   0                   59eb38b2794b0       coredns-7db6d8ff4d-7ng4v
	6088ccc5ec4be       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago      Running             coredns                   0                   a68caa8578d30       coredns-7db6d8ff4d-nx5xs
	9d0ad73531279       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       0                   ab6a16146209c       storage-provisioner
	468b13f5a8054       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      12 minutes ago      Running             kindnet-cni               0                   956df8749e8db       kindnet-vnds7
	e41f8891c5177       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      12 minutes ago      Running             kube-proxy                0                   ab9fd8c2e0094       kube-proxy-xnbqj
	96a229fabb5aa       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     13 minutes ago      Running             kube-vip                  0                   79ad95611cf22       kube-vip-ha-406291
	a143e6000662a       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      13 minutes ago      Running             kube-scheduler            0                   7cae0fc993f3a       kube-scheduler-ha-406291
	89b399d67fa40       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      0                   afce4542ea7ca       etcd-ha-406291
	2d71c6ae5cee5       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      13 minutes ago      Running             kube-apiserver            0                   9552de7a0cb73       kube-apiserver-ha-406291
	3fbe446b39e8d       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      13 minutes ago      Running             kube-controller-manager   0                   2b8837f8e36da       kube-controller-manager-ha-406291
	
	
	==> coredns [6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57758 - 16030 "HINFO IN 938012208132191314.8379741084222464033. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014128651s
	[INFO] 10.244.0.4:60864 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000870211s
	[INFO] 10.244.0.4:49527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014553s
	[INFO] 10.244.0.4:59987 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181145s
	[INFO] 10.244.0.4:59378 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009664502s
	[INFO] 10.244.0.4:59188 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181625s
	[INFO] 10.244.0.4:33100 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137671s
	[INFO] 10.244.0.4:43551 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129631s
	[INFO] 10.244.0.4:59759 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152418s
	[INFO] 10.244.0.4:60292 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090372s
	[INFO] 10.244.0.4:47967 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093215s
	[INFO] 10.244.0.4:44642 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175452s
	[INFO] 10.244.0.4:49677 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000070108s
	
	
	==> coredns [6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45911 - 30730 "HINFO IN 2397840142540691982.2649863782968500509. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014966559s
	[INFO] 10.244.0.4:38404 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.013105268s
	[INFO] 10.244.0.4:49299 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.225770527s
	[INFO] 10.244.0.4:41342 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.010990835s
	[INFO] 10.244.0.4:55838 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003903098s
	[INFO] 10.244.0.4:59078 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163236s
	[INFO] 10.244.0.4:39541 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147137s
	[INFO] 10.244.0.4:47420 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000120366s
	[INFO] 10.244.0.4:54009 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000255172s
	
	
	==> describe nodes <==
	Name:               ha-406291
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=ha-406291
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_21T18_27_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 18:27:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406291
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 18:40:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    ha-406291
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 10b5f2f4e64d426eb3a71e7a23c0cea5
	  System UUID:                10b5f2f4-e64d-426e-b3a7-1e7a23c0cea5
	  Boot ID:                    10778ad9-ed13-4749-a084-25b2b2bfde76
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qvl48              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-7ng4v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-7db6d8ff4d-nx5xs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-406291                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-vnds7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-406291             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-406291    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-xnbqj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-406291             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-406291                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node ha-406291 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node ha-406291 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node ha-406291 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m   node-controller  Node ha-406291 event: Registered Node ha-406291 in Controller
	  Normal  NodeReady                12m   kubelet          Node ha-406291 status is now: NodeReady
	
	
	==> dmesg <==
	[Jun21 18:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051748] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037330] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.458081] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.725935] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +4.855560] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun21 18:27] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.057394] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056681] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.167604] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.147792] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.253886] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +3.905184] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.549385] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.060073] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.066237] systemd-fstab-generator[1360]: Ignoring "noauto" option for root device
	[  +0.078680] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.552032] kauditd_printk_skb: 21 callbacks suppressed
	[Jun21 18:28] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8] <==
	{"level":"info","ts":"2024-06-21T18:27:18.512305Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.198:2380"}
	{"level":"info","ts":"2024-06-21T18:27:18.939239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-21T18:27:18.93929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-21T18:27:18.93932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgPreVoteResp from f1d2ab5330a2a0e3 at term 1"}
	{"level":"info","ts":"2024-06-21T18:27:18.939332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became candidate at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.939339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgVoteResp from f1d2ab5330a2a0e3 at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.939349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became leader at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.93936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f1d2ab5330a2a0e3 elected leader f1d2ab5330a2a0e3 at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.949394Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.951989Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f1d2ab5330a2a0e3","local-member-attributes":"{Name:ha-406291 ClientURLs:[https://192.168.39.198:2379]}","request-path":"/0/members/f1d2ab5330a2a0e3/attributes","cluster-id":"9fb372ad12afeb1b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-21T18:27:18.952029Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:27:18.952218Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:27:18.966375Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9fb372ad12afeb1b","local-member-id":"f1d2ab5330a2a0e3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.966532Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.966591Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.968078Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.198:2379"}
	{"level":"info","ts":"2024-06-21T18:27:18.969834Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-21T18:27:18.973596Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-21T18:27:18.986355Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-21T18:27:37.357719Z","caller":"traceutil/trace.go:171","msg":"trace[571743030] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"105.067279ms","start":"2024-06-21T18:27:37.252598Z","end":"2024-06-21T18:27:37.357665Z","steps":["trace[571743030] 'process raft request'  (duration: 48.775466ms)","trace[571743030] 'compare'  (duration: 56.093787ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T18:28:12.689426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.176174ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11593268453381319053 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:496 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:369 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-21T18:28:12.689586Z","caller":"traceutil/trace.go:171","msg":"trace[939483523] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"172.541349ms","start":"2024-06-21T18:28:12.517021Z","end":"2024-06-21T18:28:12.689563Z","steps":["trace[939483523] 'process raft request'  (duration: 46.605278ms)","trace[939483523] 'compare'  (duration: 124.988397ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-21T18:37:19.55118Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2024-06-21T18:37:19.562898Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":969,"took":"11.353931ms","hash":518064132,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2441216,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-06-21T18:37:19.562955Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":518064132,"revision":969,"compact-revision":-1}
	
	
	==> kernel <==
	 18:40:30 up 13 min,  0 users,  load average: 0.14, 0.18, 0.11
	Linux ha-406291 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d] <==
	I0621 18:38:29.460617       1 main.go:227] handling current node
	I0621 18:38:39.464813       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:38:39.464932       1 main.go:227] handling current node
	I0621 18:38:49.476962       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:38:49.477180       1 main.go:227] handling current node
	I0621 18:38:59.489837       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:38:59.489986       1 main.go:227] handling current node
	I0621 18:39:09.501218       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:09.501252       1 main.go:227] handling current node
	I0621 18:39:19.504588       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:19.504638       1 main.go:227] handling current node
	I0621 18:39:29.510970       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:29.511181       1 main.go:227] handling current node
	I0621 18:39:39.514989       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:39.515025       1 main.go:227] handling current node
	I0621 18:39:49.520764       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:49.520908       1 main.go:227] handling current node
	I0621 18:39:59.524302       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:59.524430       1 main.go:227] handling current node
	I0621 18:40:09.536871       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:09.536951       1 main.go:227] handling current node
	I0621 18:40:19.546045       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:19.546228       1 main.go:227] handling current node
	I0621 18:40:29.557033       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:29.557254       1 main.go:227] handling current node
	
	
	==> kube-apiserver [2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3] <==
	I0621 18:27:21.231033       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0621 18:27:21.231057       1 policy_source.go:224] refreshing policies
	E0621 18:27:21.244004       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0621 18:27:21.291900       1 controller.go:615] quota admission added evaluator for: namespaces
	I0621 18:27:21.301249       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0621 18:27:22.093764       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0621 18:27:22.100226       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0621 18:27:22.100345       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0621 18:27:22.679124       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0621 18:27:22.717908       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0621 18:27:22.803597       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0621 18:27:22.812663       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.198]
	I0621 18:27:22.813674       1 controller.go:615] quota admission added evaluator for: endpoints
	I0621 18:27:22.817676       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0621 18:27:23.142771       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0621 18:27:24.323202       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0621 18:27:24.338622       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0621 18:27:24.532806       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0621 18:27:36.921775       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0621 18:27:37.247444       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0621 18:40:26.217258       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52318: use of closed network connection
	E0621 18:40:26.646809       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52394: use of closed network connection
	E0621 18:40:27.039177       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52460: use of closed network connection
	E0621 18:40:29.475531       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52582: use of closed network connection
	E0621 18:40:29.631306       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52614: use of closed network connection
	
	
	==> kube-controller-manager [3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31] <==
	I0621 18:27:36.996032       1 shared_informer.go:320] Caches are synced for cronjob
	I0621 18:27:36.997228       1 shared_informer.go:320] Caches are synced for stateful set
	I0621 18:27:37.047455       1 shared_informer.go:320] Caches are synced for resource quota
	I0621 18:27:37.059247       1 shared_informer.go:320] Caches are synced for resource quota
	I0621 18:27:37.506333       1 shared_informer.go:320] Caches are synced for garbage collector
	I0621 18:27:37.559310       1 shared_informer.go:320] Caches are synced for garbage collector
	I0621 18:27:37.559392       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0621 18:27:37.600276       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="666.508123ms"
	I0621 18:27:37.660728       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.34673ms"
	I0621 18:27:37.660938       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="161.085µs"
	I0621 18:27:39.328050       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="55.475µs"
	I0621 18:27:39.330983       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.725µs"
	I0621 18:27:39.352409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.246µs"
	I0621 18:27:39.366116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.163µs"
	I0621 18:27:40.575618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="65.679µs"
	I0621 18:27:40.612176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.937752ms"
	I0621 18:27:40.612598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.232µs"
	I0621 18:27:40.634931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.444693ms"
	I0621 18:27:40.635035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.847µs"
	I0621 18:27:41.885215       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0621 18:28:57.137627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.563277ms"
	I0621 18:28:57.164070       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.375749ms"
	I0621 18:28:57.164194       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.743µs"
	I0621 18:29:00.876863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.452577ms"
	I0621 18:29:00.877083       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.932µs"
	
	
	==> kube-proxy [e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547] <==
	I0621 18:27:38.126736       1 server_linux.go:69] "Using iptables proxy"
	I0621 18:27:38.143236       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.198"]
	I0621 18:27:38.177576       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0621 18:27:38.177626       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0621 18:27:38.177644       1 server_linux.go:165] "Using iptables Proxier"
	I0621 18:27:38.180797       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0621 18:27:38.181002       1 server.go:872] "Version info" version="v1.30.2"
	I0621 18:27:38.181026       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 18:27:38.182882       1 config.go:192] "Starting service config controller"
	I0621 18:27:38.183195       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0621 18:27:38.183262       1 config.go:101] "Starting endpoint slice config controller"
	I0621 18:27:38.183278       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0621 18:27:38.184787       1 config.go:319] "Starting node config controller"
	I0621 18:27:38.184819       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0621 18:27:38.283818       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0621 18:27:38.283839       1 shared_informer.go:320] Caches are synced for service config
	I0621 18:27:38.285303       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d] <==
	W0621 18:27:21.175406       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0621 18:27:21.176948       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0621 18:27:21.176960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0621 18:27:21.176992       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:21.177025       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177056       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0621 18:27:21.177088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0621 18:27:21.177120       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0621 18:27:21.177197       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:21.177204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0621 18:27:22.041765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.041824       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.144830       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:22.144881       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0621 18:27:22.217224       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.217266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.256407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:22.256450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0621 18:27:22.361486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0621 18:27:22.361536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0621 18:27:22.366073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0621 18:27:22.366190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0621 18:27:25.267361       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 21 18:36:24 ha-406291 kubelet[1367]: E0621 18:36:24.482853    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:36:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:36:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:36:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:36:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:37:24 ha-406291 kubelet[1367]: E0621 18:37:24.483671    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:37:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:37:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:37:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:37:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:38:24 ha-406291 kubelet[1367]: E0621 18:38:24.483473    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:38:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:38:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:38:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:38:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:39:24 ha-406291 kubelet[1367]: E0621 18:39:24.484210    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:39:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:39:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:39:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:39:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:40:24 ha-406291 kubelet[1367]: E0621 18:40:24.483552    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:40:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:40:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:40:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:40:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b] <==
	I0621 18:27:40.053572       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0621 18:27:40.071388       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0621 18:27:40.071477       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0621 18:27:40.092555       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0621 18:27:40.093079       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-406291_9408dd1b-5b4e-4652-aac5-9de4270d5daf!
	I0621 18:27:40.092824       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3a538f5e-15b2-4fb1-aabe-7ae7b744ce8d", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-406291_9408dd1b-5b4e-4652-aac5-9de4270d5daf became leader
	I0621 18:27:40.194107       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-406291_9408dd1b-5b4e-4652-aac5-9de4270d5daf!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-406291 -n ha-406291
helpers_test.go:261: (dbg) Run:  kubectl --context ha-406291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-drm4v busybox-fc5497c4f-p2c87
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-406291 describe pod busybox-fc5497c4f-drm4v busybox-fc5497c4f-p2c87
helpers_test.go:282: (dbg) kubectl --context ha-406291 describe pod busybox-fc5497c4f-drm4v busybox-fc5497c4f-p2c87:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-drm4v
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-82b4g (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-82b4g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  67s (x3 over 11m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-fc5497c4f-p2c87
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q8tzk (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-q8tzk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  67s (x3 over 11m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (2.50s)
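
The FailedScheduling events above explain why busybox-fc5497c4f-drm4v and busybox-fc5497c4f-p2c87 never leave Pending: each busybox replica carries a required pod anti-affinity rule against the other busybox pods, and at this point the cluster has a single schedulable node, so only one replica can land there. The sketch below is illustrative only, not the manifest the test actually deploys; the Deployment name, replica count, image, and the kubernetes.io/hostname topology key are assumptions, but a spec of this shape reproduces the same event whenever replicas outnumber schedulable nodes.

	// antiaffinity_sketch.go: a hypothetical, minimal Deployment of the shape that
	// produces the FailedScheduling event above; it is not the test's real manifest.
	package main

	import (
		"fmt"

		appsv1 "k8s.io/api/apps/v1"
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		replicas := int32(3)
		labels := map[string]string{"app": "busybox"}
		dep := appsv1.Deployment{
			ObjectMeta: metav1.ObjectMeta{Name: "busybox"},
			Spec: appsv1.DeploymentSpec{
				Replicas: &replicas,
				Selector: &metav1.LabelSelector{MatchLabels: labels},
				Template: corev1.PodTemplateSpec{
					ObjectMeta: metav1.ObjectMeta{Labels: labels},
					Spec: corev1.PodSpec{
						// Required anti-affinity: no two pods with app=busybox may share a
						// node (topology key kubernetes.io/hostname), so with one schedulable
						// node only one replica can be placed; the rest stay Pending.
						Affinity: &corev1.Affinity{
							PodAntiAffinity: &corev1.PodAntiAffinity{
								RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
									LabelSelector: &metav1.LabelSelector{MatchLabels: labels},
									TopologyKey:   "kubernetes.io/hostname",
								}},
							},
						},
						Containers: []corev1.Container{{
							Name:    "busybox",
							Image:   "gcr.io/k8s-minikube/busybox:1.28",
							Command: []string{"sleep", "3600"},
						}},
					},
				},
			},
		}
		out, _ := yaml.Marshal(dep) // render as YAML suitable for kubectl apply -f -
		fmt.Println(string(out))
	}

With additional schedulable nodes (such as the worker added in the next test), the scheduler would have room to place further replicas of a spec like this.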

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (43.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-406291 -v=7 --alsologtostderr
E0621 18:40:54.862097   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-406291 -v=7 --alsologtostderr: (41.472469319s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr: exit status 2 (569.712963ms)

                                                
                                                
-- stdout --
	ha-406291
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-406291-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-406291-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0621 18:41:13.042289   34358 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:41:13.042403   34358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:41:13.042412   34358 out.go:304] Setting ErrFile to fd 2...
	I0621 18:41:13.042416   34358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:41:13.042635   34358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:41:13.042789   34358 out.go:298] Setting JSON to false
	I0621 18:41:13.042811   34358 mustload.go:65] Loading cluster: ha-406291
	I0621 18:41:13.042938   34358 notify.go:220] Checking for updates...
	I0621 18:41:13.043328   34358 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:41:13.043348   34358 status.go:255] checking status of ha-406291 ...
	I0621 18:41:13.043775   34358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:13.043837   34358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:13.062727   34358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36593
	I0621 18:41:13.063130   34358 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:13.063768   34358 main.go:141] libmachine: Using API Version  1
	I0621 18:41:13.063795   34358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:13.064116   34358 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:13.064330   34358 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:41:13.065910   34358 status.go:330] ha-406291 host status = "Running" (err=<nil>)
	I0621 18:41:13.065925   34358 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:41:13.066291   34358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:13.066344   34358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:13.080672   34358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37199
	I0621 18:41:13.081078   34358 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:13.081505   34358 main.go:141] libmachine: Using API Version  1
	I0621 18:41:13.081525   34358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:13.081826   34358 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:13.082036   34358 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:41:13.084821   34358 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:41:13.085229   34358 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:41:13.085248   34358 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:41:13.085365   34358 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:41:13.085636   34358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:13.085678   34358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:13.101006   34358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42811
	I0621 18:41:13.101406   34358 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:13.101830   34358 main.go:141] libmachine: Using API Version  1
	I0621 18:41:13.101853   34358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:13.102186   34358 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:13.102360   34358 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:41:13.102511   34358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:41:13.102542   34358 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:41:13.105353   34358 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:41:13.105717   34358 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:41:13.105744   34358 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:41:13.105878   34358 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:41:13.106045   34358 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:41:13.106202   34358 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:41:13.106338   34358 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:41:13.181662   34358 ssh_runner.go:195] Run: systemctl --version
	I0621 18:41:13.187493   34358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:41:13.202052   34358 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:41:13.202080   34358 api_server.go:166] Checking apiserver status ...
	I0621 18:41:13.202114   34358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0621 18:41:13.218777   34358 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup
	W0621 18:41:13.227983   34358 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:41:13.228035   34358 ssh_runner.go:195] Run: ls
	I0621 18:41:13.232168   34358 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0621 18:41:13.236222   34358 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0621 18:41:13.236245   34358 status.go:422] ha-406291 apiserver status = Running (err=<nil>)
	I0621 18:41:13.236254   34358 status.go:257] ha-406291 status: &{Name:ha-406291 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:41:13.236273   34358 status.go:255] checking status of ha-406291-m02 ...
	I0621 18:41:13.236573   34358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:13.236612   34358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:13.251658   34358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41883
	I0621 18:41:13.252098   34358 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:13.252604   34358 main.go:141] libmachine: Using API Version  1
	I0621 18:41:13.252623   34358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:13.252931   34358 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:13.253131   34358 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:41:13.254860   34358 status.go:330] ha-406291-m02 host status = "Running" (err=<nil>)
	I0621 18:41:13.254874   34358 host.go:66] Checking if "ha-406291-m02" exists ...
	I0621 18:41:13.255148   34358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:13.255184   34358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:13.272414   34358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41903
	I0621 18:41:13.272792   34358 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:13.273250   34358 main.go:141] libmachine: Using API Version  1
	I0621 18:41:13.273276   34358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:13.273572   34358 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:13.273758   34358 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:41:13.276736   34358 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:13.277156   34358 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:41:13.277174   34358 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:13.277394   34358 host.go:66] Checking if "ha-406291-m02" exists ...
	I0621 18:41:13.277784   34358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:13.277865   34358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:13.294059   34358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38243
	I0621 18:41:13.294518   34358 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:13.294975   34358 main.go:141] libmachine: Using API Version  1
	I0621 18:41:13.294997   34358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:13.295285   34358 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:13.295453   34358 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:41:13.295621   34358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:41:13.295640   34358 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:41:13.298354   34358 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:13.298701   34358 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:41:13.298726   34358 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:13.298886   34358 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:41:13.299065   34358 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:41:13.299181   34358 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:41:13.299349   34358 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:41:13.385275   34358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:41:13.399858   34358 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:41:13.399882   34358 api_server.go:166] Checking apiserver status ...
	I0621 18:41:13.399909   34358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0621 18:41:13.411663   34358 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:41:13.411689   34358 status.go:422] ha-406291-m02 apiserver status = Stopped (err=<nil>)
	I0621 18:41:13.411700   34358 status.go:257] ha-406291-m02 status: &{Name:ha-406291-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:41:13.411718   34358 status.go:255] checking status of ha-406291-m03 ...
	I0621 18:41:13.412045   34358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:13.412079   34358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:13.427530   34358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37619
	I0621 18:41:13.427956   34358 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:13.428877   34358 main.go:141] libmachine: Using API Version  1
	I0621 18:41:13.428898   34358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:13.429221   34358 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:13.429421   34358 main.go:141] libmachine: (ha-406291-m03) Calling .GetState
	I0621 18:41:13.430978   34358 status.go:330] ha-406291-m03 host status = "Running" (err=<nil>)
	I0621 18:41:13.430997   34358 host.go:66] Checking if "ha-406291-m03" exists ...
	I0621 18:41:13.431363   34358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:13.431404   34358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:13.446346   34358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32947
	I0621 18:41:13.446728   34358 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:13.447163   34358 main.go:141] libmachine: Using API Version  1
	I0621 18:41:13.447181   34358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:13.447436   34358 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:13.447624   34358 main.go:141] libmachine: (ha-406291-m03) Calling .GetIP
	I0621 18:41:13.450153   34358 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:41:13.450494   34358 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:41:13.450527   34358 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:41:13.450626   34358 host.go:66] Checking if "ha-406291-m03" exists ...
	I0621 18:41:13.450908   34358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:13.450940   34358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:13.465769   34358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39663
	I0621 18:41:13.466190   34358 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:13.466570   34358 main.go:141] libmachine: Using API Version  1
	I0621 18:41:13.466588   34358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:13.466877   34358 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:13.467044   34358 main.go:141] libmachine: (ha-406291-m03) Calling .DriverName
	I0621 18:41:13.467225   34358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:41:13.467259   34358 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHHostname
	I0621 18:41:13.469636   34358 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:41:13.470108   34358 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:41:13.470131   34358 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:41:13.470277   34358 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHPort
	I0621 18:41:13.470448   34358 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHKeyPath
	I0621 18:41:13.470580   34358 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHUsername
	I0621 18:41:13.470704   34358 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m03/id_rsa Username:docker}
	I0621 18:41:13.556950   34358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:41:13.570245   34358 status.go:257] ha-406291-m03 status: &{Name:ha-406291-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:236: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr" : exit status 2
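
The exit status 2 above corresponds to the per-node rows in the stdout block: ha-406291-m02 reports kubelet and apiserver as Stopped while its host is Running, whereas the newly added worker ha-406291-m03 is healthy. The stderr trace shows how each control-plane node is probed: systemctl and pgrep checks over SSH, plus an HTTP GET against the shared API endpoint taken from the kubeconfig, https://192.168.39.254:8443/healthz. The snippet below is a minimal standalone sketch of that healthz probe, not minikube's actual client code; the 5 second timeout and the skipped certificate verification are assumptions made to keep it self-contained.

	// healthz_probe.go: hypothetical standalone version of the apiserver health
	// check seen in the api_server.go lines of the stderr trace above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption for the sketch: skip CA verification instead of loading the
			// cluster CA from the kubeconfig as a real client would.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A 200 with body "ok" matches the healthy result logged for ha-406291;
		// a connection error or non-200 would correspond to a stopped apiserver.
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
	}
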
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-406291 -n ha-406291
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-406291 logs -n 25: (1.118405288s)
helpers_test.go:252: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |           |         |         |                     |                     |
	| node    | add -p ha-406291 -v=7                | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:41 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/21 18:26:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0621 18:26:42.447747   30068 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:26:42.447858   30068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:26:42.447867   30068 out.go:304] Setting ErrFile to fd 2...
	I0621 18:26:42.447871   30068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:26:42.448064   30068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:26:42.448611   30068 out.go:298] Setting JSON to false
	I0621 18:26:42.449397   30068 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4100,"bootTime":1718990302,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 18:26:42.449454   30068 start.go:139] virtualization: kvm guest
	I0621 18:26:42.451750   30068 out.go:177] * [ha-406291] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 18:26:42.453097   30068 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 18:26:42.453116   30068 notify.go:220] Checking for updates...
	I0621 18:26:42.456195   30068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 18:26:42.457398   30068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:26:42.458579   30068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.459798   30068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 18:26:42.461088   30068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 18:26:42.462525   30068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 18:26:42.497263   30068 out.go:177] * Using the kvm2 driver based on user configuration
	I0621 18:26:42.498734   30068 start.go:297] selected driver: kvm2
	I0621 18:26:42.498753   30068 start.go:901] validating driver "kvm2" against <nil>
	I0621 18:26:42.498763   30068 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 18:26:42.499421   30068 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:26:42.499483   30068 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 18:26:42.513772   30068 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 18:26:42.513840   30068 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0621 18:26:42.514036   30068 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0621 18:26:42.514063   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:26:42.514070   30068 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0621 18:26:42.514080   30068 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0621 18:26:42.514119   30068 start.go:340] cluster config:
	{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:26:42.514203   30068 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:26:42.515839   30068 out.go:177] * Starting "ha-406291" primary control-plane node in "ha-406291" cluster
	I0621 18:26:42.516925   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:26:42.516952   30068 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 18:26:42.516960   30068 cache.go:56] Caching tarball of preloaded images
	I0621 18:26:42.517025   30068 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:26:42.517035   30068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:26:42.517302   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:26:42.517325   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json: {Name:mkd43eceea282503c79b6e4b90bbf7258fcf8b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:26:42.517445   30068 start.go:360] acquireMachinesLock for ha-406291: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:26:42.517470   30068 start.go:364] duration metric: took 13.314µs to acquireMachinesLock for "ha-406291"
	I0621 18:26:42.517485   30068 start.go:93] Provisioning new machine with config: &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:26:42.517531   30068 start.go:125] createHost starting for "" (driver="kvm2")
	I0621 18:26:42.518937   30068 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0621 18:26:42.519071   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:26:42.519109   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:26:42.533235   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0621 18:26:42.533669   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:26:42.534312   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:26:42.534360   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:26:42.534665   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:26:42.534880   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:26:42.535018   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:26:42.535180   30068 start.go:159] libmachine.API.Create for "ha-406291" (driver="kvm2")
	I0621 18:26:42.535209   30068 client.go:168] LocalClient.Create starting
	I0621 18:26:42.535233   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem
	I0621 18:26:42.535267   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:26:42.535282   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:26:42.535339   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem
	I0621 18:26:42.535357   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:26:42.535367   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:26:42.535383   30068 main.go:141] libmachine: Running pre-create checks...
	I0621 18:26:42.535396   30068 main.go:141] libmachine: (ha-406291) Calling .PreCreateCheck
	I0621 18:26:42.535734   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:26:42.536101   30068 main.go:141] libmachine: Creating machine...
	I0621 18:26:42.536113   30068 main.go:141] libmachine: (ha-406291) Calling .Create
	I0621 18:26:42.536232   30068 main.go:141] libmachine: (ha-406291) Creating KVM machine...
	I0621 18:26:42.537484   30068 main.go:141] libmachine: (ha-406291) DBG | found existing default KVM network
	I0621 18:26:42.538310   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.538153   30091 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0621 18:26:42.538339   30068 main.go:141] libmachine: (ha-406291) DBG | created network xml: 
	I0621 18:26:42.538346   30068 main.go:141] libmachine: (ha-406291) DBG | <network>
	I0621 18:26:42.538355   30068 main.go:141] libmachine: (ha-406291) DBG |   <name>mk-ha-406291</name>
	I0621 18:26:42.538371   30068 main.go:141] libmachine: (ha-406291) DBG |   <dns enable='no'/>
	I0621 18:26:42.538385   30068 main.go:141] libmachine: (ha-406291) DBG |   
	I0621 18:26:42.538392   30068 main.go:141] libmachine: (ha-406291) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0621 18:26:42.538400   30068 main.go:141] libmachine: (ha-406291) DBG |     <dhcp>
	I0621 18:26:42.538412   30068 main.go:141] libmachine: (ha-406291) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0621 18:26:42.538421   30068 main.go:141] libmachine: (ha-406291) DBG |     </dhcp>
	I0621 18:26:42.538439   30068 main.go:141] libmachine: (ha-406291) DBG |   </ip>
	I0621 18:26:42.538451   30068 main.go:141] libmachine: (ha-406291) DBG |   
	I0621 18:26:42.538458   30068 main.go:141] libmachine: (ha-406291) DBG | </network>
	I0621 18:26:42.538470   30068 main.go:141] libmachine: (ha-406291) DBG | 
	I0621 18:26:42.543401   30068 main.go:141] libmachine: (ha-406291) DBG | trying to create private KVM network mk-ha-406291 192.168.39.0/24...
	I0621 18:26:42.606041   30068 main.go:141] libmachine: (ha-406291) DBG | private KVM network mk-ha-406291 192.168.39.0/24 created
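Editor's note: the <network> XML printed above can also be defined and started by hand with virsh. This is only an illustrative sketch (the file name mk-ha-406291.xml is assumed), not the code path the kvm2 driver actually takes:

	# save the <network> XML shown above to mk-ha-406291.xml, then:
	virsh --connect qemu:///system net-define mk-ha-406291.xml   # register the network definition
	virsh --connect qemu:///system net-start mk-ha-406291        # bring up the bridge and its DHCP range
	virsh --connect qemu:///system net-list --all                # verify mk-ha-406291 shows up as active
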
	I0621 18:26:42.606072   30068 main.go:141] libmachine: (ha-406291) Setting up store path in /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 ...
	I0621 18:26:42.606091   30068 main.go:141] libmachine: (ha-406291) Building disk image from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 18:26:42.606165   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.606075   30091 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.606280   30068 main.go:141] libmachine: (ha-406291) Downloading /home/jenkins/minikube-integration/19112-8111/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0621 18:26:42.829374   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.829262   30091 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa...
	I0621 18:26:42.941790   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.941666   30091 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/ha-406291.rawdisk...
	I0621 18:26:42.941834   30068 main.go:141] libmachine: (ha-406291) DBG | Writing magic tar header
	I0621 18:26:42.941844   30068 main.go:141] libmachine: (ha-406291) DBG | Writing SSH key tar header
	I0621 18:26:42.941852   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.941778   30091 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 ...
	I0621 18:26:42.941909   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291
	I0621 18:26:42.941989   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 (perms=drwx------)
	I0621 18:26:42.942007   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines
	I0621 18:26:42.942019   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines (perms=drwxr-xr-x)
	I0621 18:26:42.942033   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube (perms=drwxr-xr-x)
	I0621 18:26:42.942053   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111 (perms=drwxrwxr-x)
	I0621 18:26:42.942060   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.942069   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111
	I0621 18:26:42.942075   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0621 18:26:42.942080   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0621 18:26:42.942088   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins
	I0621 18:26:42.942104   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0621 18:26:42.942117   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home
	I0621 18:26:42.942128   30068 main.go:141] libmachine: (ha-406291) DBG | Skipping /home - not owner
	I0621 18:26:42.942142   30068 main.go:141] libmachine: (ha-406291) Creating domain...
	I0621 18:26:42.943154   30068 main.go:141] libmachine: (ha-406291) define libvirt domain using xml: 
	I0621 18:26:42.943176   30068 main.go:141] libmachine: (ha-406291) <domain type='kvm'>
	I0621 18:26:42.943183   30068 main.go:141] libmachine: (ha-406291)   <name>ha-406291</name>
	I0621 18:26:42.943188   30068 main.go:141] libmachine: (ha-406291)   <memory unit='MiB'>2200</memory>
	I0621 18:26:42.943199   30068 main.go:141] libmachine: (ha-406291)   <vcpu>2</vcpu>
	I0621 18:26:42.943203   30068 main.go:141] libmachine: (ha-406291)   <features>
	I0621 18:26:42.943208   30068 main.go:141] libmachine: (ha-406291)     <acpi/>
	I0621 18:26:42.943212   30068 main.go:141] libmachine: (ha-406291)     <apic/>
	I0621 18:26:42.943217   30068 main.go:141] libmachine: (ha-406291)     <pae/>
	I0621 18:26:42.943223   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943229   30068 main.go:141] libmachine: (ha-406291)   </features>
	I0621 18:26:42.943234   30068 main.go:141] libmachine: (ha-406291)   <cpu mode='host-passthrough'>
	I0621 18:26:42.943255   30068 main.go:141] libmachine: (ha-406291)   
	I0621 18:26:42.943266   30068 main.go:141] libmachine: (ha-406291)   </cpu>
	I0621 18:26:42.943284   30068 main.go:141] libmachine: (ha-406291)   <os>
	I0621 18:26:42.943318   30068 main.go:141] libmachine: (ha-406291)     <type>hvm</type>
	I0621 18:26:42.943328   30068 main.go:141] libmachine: (ha-406291)     <boot dev='cdrom'/>
	I0621 18:26:42.943333   30068 main.go:141] libmachine: (ha-406291)     <boot dev='hd'/>
	I0621 18:26:42.943341   30068 main.go:141] libmachine: (ha-406291)     <bootmenu enable='no'/>
	I0621 18:26:42.943345   30068 main.go:141] libmachine: (ha-406291)   </os>
	I0621 18:26:42.943355   30068 main.go:141] libmachine: (ha-406291)   <devices>
	I0621 18:26:42.943360   30068 main.go:141] libmachine: (ha-406291)     <disk type='file' device='cdrom'>
	I0621 18:26:42.943371   30068 main.go:141] libmachine: (ha-406291)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/boot2docker.iso'/>
	I0621 18:26:42.943384   30068 main.go:141] libmachine: (ha-406291)       <target dev='hdc' bus='scsi'/>
	I0621 18:26:42.943397   30068 main.go:141] libmachine: (ha-406291)       <readonly/>
	I0621 18:26:42.943404   30068 main.go:141] libmachine: (ha-406291)     </disk>
	I0621 18:26:42.943417   30068 main.go:141] libmachine: (ha-406291)     <disk type='file' device='disk'>
	I0621 18:26:42.943429   30068 main.go:141] libmachine: (ha-406291)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0621 18:26:42.943445   30068 main.go:141] libmachine: (ha-406291)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/ha-406291.rawdisk'/>
	I0621 18:26:42.943456   30068 main.go:141] libmachine: (ha-406291)       <target dev='hda' bus='virtio'/>
	I0621 18:26:42.943478   30068 main.go:141] libmachine: (ha-406291)     </disk>
	I0621 18:26:42.943499   30068 main.go:141] libmachine: (ha-406291)     <interface type='network'>
	I0621 18:26:42.943509   30068 main.go:141] libmachine: (ha-406291)       <source network='mk-ha-406291'/>
	I0621 18:26:42.943513   30068 main.go:141] libmachine: (ha-406291)       <model type='virtio'/>
	I0621 18:26:42.943519   30068 main.go:141] libmachine: (ha-406291)     </interface>
	I0621 18:26:42.943526   30068 main.go:141] libmachine: (ha-406291)     <interface type='network'>
	I0621 18:26:42.943532   30068 main.go:141] libmachine: (ha-406291)       <source network='default'/>
	I0621 18:26:42.943539   30068 main.go:141] libmachine: (ha-406291)       <model type='virtio'/>
	I0621 18:26:42.943544   30068 main.go:141] libmachine: (ha-406291)     </interface>
	I0621 18:26:42.943549   30068 main.go:141] libmachine: (ha-406291)     <serial type='pty'>
	I0621 18:26:42.943554   30068 main.go:141] libmachine: (ha-406291)       <target port='0'/>
	I0621 18:26:42.943560   30068 main.go:141] libmachine: (ha-406291)     </serial>
	I0621 18:26:42.943565   30068 main.go:141] libmachine: (ha-406291)     <console type='pty'>
	I0621 18:26:42.943571   30068 main.go:141] libmachine: (ha-406291)       <target type='serial' port='0'/>
	I0621 18:26:42.943583   30068 main.go:141] libmachine: (ha-406291)     </console>
	I0621 18:26:42.943593   30068 main.go:141] libmachine: (ha-406291)     <rng model='virtio'>
	I0621 18:26:42.943602   30068 main.go:141] libmachine: (ha-406291)       <backend model='random'>/dev/random</backend>
	I0621 18:26:42.943609   30068 main.go:141] libmachine: (ha-406291)     </rng>
	I0621 18:26:42.943617   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943621   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943627   30068 main.go:141] libmachine: (ha-406291)   </devices>
	I0621 18:26:42.943631   30068 main.go:141] libmachine: (ha-406291) </domain>
	I0621 18:26:42.943638   30068 main.go:141] libmachine: (ha-406291) 
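Editor's note: the domain XML above is what libmachine hands to libvirt. An equivalent manual flow, sketched purely for illustration (the file name ha-406291.xml is assumed):

	virsh --connect qemu:///system define ha-406291.xml     # define the domain from the XML above
	virsh --connect qemu:///system start ha-406291          # boot it (cdrom first, then the raw disk)
	virsh --connect qemu:///system domiflist ha-406291      # lists the two virtio NICs and their MAC addresses
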
	I0621 18:26:42.948298   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:44:10:c4 in network default
	I0621 18:26:42.948968   30068 main.go:141] libmachine: (ha-406291) Ensuring networks are active...
	I0621 18:26:42.948988   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:42.949710   30068 main.go:141] libmachine: (ha-406291) Ensuring network default is active
	I0621 18:26:42.950033   30068 main.go:141] libmachine: (ha-406291) Ensuring network mk-ha-406291 is active
	I0621 18:26:42.950493   30068 main.go:141] libmachine: (ha-406291) Getting domain xml...
	I0621 18:26:42.951151   30068 main.go:141] libmachine: (ha-406291) Creating domain...
	I0621 18:26:44.128421   30068 main.go:141] libmachine: (ha-406291) Waiting to get IP...
	I0621 18:26:44.129183   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.129530   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.129550   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.129513   30091 retry.go:31] will retry after 273.280189ms: waiting for machine to come up
	I0621 18:26:44.404590   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.405440   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.405467   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.405386   30091 retry.go:31] will retry after 363.287979ms: waiting for machine to come up
	I0621 18:26:44.769749   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.770188   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.770217   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.770146   30091 retry.go:31] will retry after 445.9009ms: waiting for machine to come up
	I0621 18:26:45.217708   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:45.218113   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:45.218132   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:45.218075   30091 retry.go:31] will retry after 497.769852ms: waiting for machine to come up
	I0621 18:26:45.717913   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:45.718380   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:45.718402   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:45.718333   30091 retry.go:31] will retry after 609.412902ms: waiting for machine to come up
	I0621 18:26:46.329589   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:46.330043   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:46.330077   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:46.330033   30091 retry.go:31] will retry after 668.226784ms: waiting for machine to come up
	I0621 18:26:46.999851   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:47.000352   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:47.000399   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:47.000310   30091 retry.go:31] will retry after 928.90777ms: waiting for machine to come up
	I0621 18:26:47.931043   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:47.931568   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:47.931598   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:47.931527   30091 retry.go:31] will retry after 1.407643188s: waiting for machine to come up
	I0621 18:26:49.341126   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:49.341529   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:49.341557   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:49.341489   30091 retry.go:31] will retry after 1.657120945s: waiting for machine to come up
	I0621 18:26:51.001518   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:51.001999   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:51.002022   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:51.001955   30091 retry.go:31] will retry after 1.506025988s: waiting for machine to come up
	I0621 18:26:52.509823   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:52.510314   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:52.510342   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:52.510269   30091 retry.go:31] will retry after 2.859818514s: waiting for machine to come up
	I0621 18:26:55.371181   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:55.371726   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:55.371755   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:55.371678   30091 retry.go:31] will retry after 3.374080501s: waiting for machine to come up
	I0621 18:26:58.747494   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:58.748019   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:58.748039   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:58.747991   30091 retry.go:31] will retry after 4.386740875s: waiting for machine to come up
	I0621 18:27:03.136546   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.137046   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has current primary IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.137063   30068 main.go:141] libmachine: (ha-406291) Found IP for machine: 192.168.39.198
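Editor's note: the retry loop above polls with increasing delays until the guest's MAC address obtains a DHCP lease on the private network. A rough equivalent check from the host, shown only as a sketch:

	# poll the DHCP leases of the private network until the VM's MAC appears
	until virsh --connect qemu:///system net-dhcp-leases mk-ha-406291 | grep -q '52:54:00:38:dc:46'; do
	  sleep 2
	done
	virsh --connect qemu:///system domifaddr ha-406291 --source lease   # prints 192.168.39.198 once leased
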
	I0621 18:27:03.137079   30068 main.go:141] libmachine: (ha-406291) Reserving static IP address...
	I0621 18:27:03.137427   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find host DHCP lease matching {name: "ha-406291", mac: "52:54:00:38:dc:46", ip: "192.168.39.198"} in network mk-ha-406291
	I0621 18:27:03.211473   30068 main.go:141] libmachine: (ha-406291) DBG | Getting to WaitForSSH function...
	I0621 18:27:03.211506   30068 main.go:141] libmachine: (ha-406291) Reserved static IP address: 192.168.39.198
	I0621 18:27:03.211519   30068 main.go:141] libmachine: (ha-406291) Waiting for SSH to be available...
	I0621 18:27:03.214029   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.214477   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291
	I0621 18:27:03.214509   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find defined IP address of network mk-ha-406291 interface with MAC address 52:54:00:38:dc:46
	I0621 18:27:03.214661   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH client type: external
	I0621 18:27:03.214702   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa (-rw-------)
	I0621 18:27:03.214745   30068 main.go:141] libmachine: (ha-406291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:03.214771   30068 main.go:141] libmachine: (ha-406291) DBG | About to run SSH command:
	I0621 18:27:03.214784   30068 main.go:141] libmachine: (ha-406291) DBG | exit 0
	I0621 18:27:03.218578   30068 main.go:141] libmachine: (ha-406291) DBG | SSH cmd err, output: exit status 255: 
	I0621 18:27:03.218603   30068 main.go:141] libmachine: (ha-406291) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0621 18:27:03.218614   30068 main.go:141] libmachine: (ha-406291) DBG | command : exit 0
	I0621 18:27:03.218630   30068 main.go:141] libmachine: (ha-406291) DBG | err     : exit status 255
	I0621 18:27:03.218643   30068 main.go:141] libmachine: (ha-406291) DBG | output  : 
	I0621 18:27:06.220803   30068 main.go:141] libmachine: (ha-406291) DBG | Getting to WaitForSSH function...
	I0621 18:27:06.223287   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.223552   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.223591   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.223725   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH client type: external
	I0621 18:27:06.223751   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa (-rw-------)
	I0621 18:27:06.223775   30068 main.go:141] libmachine: (ha-406291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:06.223788   30068 main.go:141] libmachine: (ha-406291) DBG | About to run SSH command:
	I0621 18:27:06.223797   30068 main.go:141] libmachine: (ha-406291) DBG | exit 0
	I0621 18:27:06.345962   30068 main.go:141] libmachine: (ha-406291) DBG | SSH cmd err, output: <nil>: 
	I0621 18:27:06.346198   30068 main.go:141] libmachine: (ha-406291) KVM machine creation complete!
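Editor's note: WaitForSSH simply re-runs "exit 0" over SSH with the options logged above until sshd answers cleanly (the first attempt fails with exit status 255 because the guest is still booting). The same readiness probe, as a sketch using the paths from this run:

	KEY=/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa
	until ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	          -o ConnectTimeout=10 -i "$KEY" docker@192.168.39.198 exit 0; do
	  sleep 3   # keep retrying until the guest's sshd is up
	done
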
	I0621 18:27:06.346530   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:27:06.347151   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:06.347376   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:06.347539   30068 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0621 18:27:06.347553   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:06.349257   30068 main.go:141] libmachine: Detecting operating system of created instance...
	I0621 18:27:06.349272   30068 main.go:141] libmachine: Waiting for SSH to be available...
	I0621 18:27:06.349278   30068 main.go:141] libmachine: Getting to WaitForSSH function...
	I0621 18:27:06.349284   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.351365   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.351709   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.351738   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.351848   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.352053   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.352215   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.352441   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.352676   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.352926   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.352939   30068 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0621 18:27:06.449038   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:06.449066   30068 main.go:141] libmachine: Detecting the provisioner...
	I0621 18:27:06.449077   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.451811   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.452202   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.452223   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.452405   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.452602   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.452762   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.452898   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.453074   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.453321   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.453334   30068 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0621 18:27:06.550539   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0621 18:27:06.550611   30068 main.go:141] libmachine: found compatible host: buildroot
	I0621 18:27:06.550618   30068 main.go:141] libmachine: Provisioning with buildroot...
	I0621 18:27:06.550625   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.550871   30068 buildroot.go:166] provisioning hostname "ha-406291"
	I0621 18:27:06.550891   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.551068   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.553701   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.554112   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.554138   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.554279   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.554452   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.554601   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.554725   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.554869   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.555029   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.555040   30068 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291 && echo "ha-406291" | sudo tee /etc/hostname
	I0621 18:27:06.664012   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291
	
	I0621 18:27:06.664038   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.666600   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.666923   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.666952   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.667091   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.667277   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.667431   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.667559   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.667745   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.667932   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.667949   30068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:27:06.778156   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
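Editor's note: the hostname and /etc/hosts provisioning above can be spot-checked on the guest with two quick commands (illustrative only):

	hostname                      # should print ha-406291
	grep 'ha-406291' /etc/hosts   # should show the 127.0.1.1 entry added above
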
	I0621 18:27:06.778199   30068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:27:06.778224   30068 buildroot.go:174] setting up certificates
	I0621 18:27:06.778237   30068 provision.go:84] configureAuth start
	I0621 18:27:06.778250   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.778526   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:06.781267   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.781583   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.781610   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.781773   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.784225   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.784546   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.784564   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.784717   30068 provision.go:143] copyHostCerts
	I0621 18:27:06.784747   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:06.784796   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:27:06.784813   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:06.784893   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:27:06.784992   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:06.785017   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:27:06.785023   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:06.785064   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:27:06.785126   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:06.785153   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:27:06.785162   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:06.785194   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:27:06.785257   30068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291 san=[127.0.0.1 192.168.39.198 ha-406291 localhost minikube]
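Editor's note: minikube generates this server certificate in Go; purely as an illustration, an openssl equivalent that produces a cert signed by the same CA with the org and SANs listed in the line above (file names are assumptions):

	# illustrative openssl equivalent, not minikube's actual code path
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.ha-406291"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.198,DNS:ha-406291,DNS:localhost,DNS:minikube") \
	  -out server.pem
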
	I0621 18:27:06.904910   30068 provision.go:177] copyRemoteCerts
	I0621 18:27:06.904976   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:27:06.905004   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.907600   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.907883   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.907916   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.908115   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.908308   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.908462   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.908599   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:06.987463   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:27:06.987540   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:27:07.009572   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:27:07.009661   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0621 18:27:07.031219   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:27:07.031333   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 18:27:07.052682   30068 provision.go:87] duration metric: took 274.433059ms to configureAuth
	I0621 18:27:07.052709   30068 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:27:07.052895   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:07.052984   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.055368   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.055720   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.055742   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.055971   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.056161   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.056324   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.056453   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.056615   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:07.056785   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:07.056814   30068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:27:07.307055   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
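Editor's note: the "%!s(MISSING)" in the logged command appears to be only a stray Printf verb in the log message itself; judging from the output echoed back, the command that effectively ran on the guest was (reconstruction, approximate):

	sudo mkdir -p /etc/sysconfig \
	  && printf %s "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
	     | sudo tee /etc/sysconfig/crio.minikube \
	  && sudo systemctl restart crio
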
	
	I0621 18:27:07.307083   30068 main.go:141] libmachine: Checking connection to Docker...
	I0621 18:27:07.307105   30068 main.go:141] libmachine: (ha-406291) Calling .GetURL
	I0621 18:27:07.308373   30068 main.go:141] libmachine: (ha-406291) DBG | Using libvirt version 6000000
	I0621 18:27:07.310322   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.310631   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.310658   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.310756   30068 main.go:141] libmachine: Docker is up and running!
	I0621 18:27:07.310768   30068 main.go:141] libmachine: Reticulating splines...
	I0621 18:27:07.310774   30068 client.go:171] duration metric: took 24.775558818s to LocalClient.Create
	I0621 18:27:07.310795   30068 start.go:167] duration metric: took 24.775614868s to libmachine.API.Create "ha-406291"
	I0621 18:27:07.310807   30068 start.go:293] postStartSetup for "ha-406291" (driver="kvm2")
	I0621 18:27:07.310818   30068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:27:07.310835   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.311186   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:27:07.311208   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.313308   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.313543   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.313581   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.313682   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.313855   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.314042   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.314209   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.391859   30068 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:27:07.396062   30068 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:27:07.396083   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:27:07.396132   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:27:07.396193   30068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:27:07.396202   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:27:07.396289   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:27:07.405435   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:07.427927   30068 start.go:296] duration metric: took 117.075834ms for postStartSetup
	I0621 18:27:07.427984   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:27:07.428562   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:07.431157   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.431479   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.431523   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.431791   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:07.431969   30068 start.go:128] duration metric: took 24.914429669s to createHost
	I0621 18:27:07.431990   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.434121   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.434421   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.434445   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.434510   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.434692   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.434865   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.435009   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.435168   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:07.435372   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:07.435384   30068 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0621 18:27:07.530141   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718994427.508226463
	
	I0621 18:27:07.530165   30068 fix.go:216] guest clock: 1718994427.508226463
	I0621 18:27:07.530173   30068 fix.go:229] Guest: 2024-06-21 18:27:07.508226463 +0000 UTC Remote: 2024-06-21 18:27:07.431981059 +0000 UTC m=+25.016949864 (delta=76.245404ms)
	I0621 18:27:07.530199   30068 fix.go:200] guest clock delta is within tolerance: 76.245404ms
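A rough standalone equivalent of the guest-clock check above: take a timestamp from the guest, diff it against the host clock, and accept small deltas. The 1-second tolerance below is assumed for illustration only, and the guest timestamp would really be read from the VM over SSH rather than the local date call shown here.

	guest_ts=$(date +%s.%N)   # stand-in for the value read from the guest
	host_ts=$(date +%s.%N)
	delta=$(awk -v g="$guest_ts" -v h="$host_ts" 'BEGIN { printf "%f", h - g }')
	if awk -v d="$delta" 'BEGIN { exit !(d < 1 && d > -1) }'; then
	  echo "guest clock delta within tolerance: ${delta}s"
	else
	  echo "guest clock delta too large: ${delta}s"
	fi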
	I0621 18:27:07.530204   30068 start.go:83] releasing machines lock for "ha-406291", held for 25.012726918s
	I0621 18:27:07.530222   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.530466   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:07.532753   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.533110   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.533151   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.533275   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533702   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533877   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533978   30068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:27:07.534028   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.534087   30068 ssh_runner.go:195] Run: cat /version.json
	I0621 18:27:07.534115   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.536489   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536798   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.536828   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536845   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536983   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.537154   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.537312   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.537330   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.537337   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.537509   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.537507   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.537675   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.537830   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.537968   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.610886   30068 ssh_runner.go:195] Run: systemctl --version
	I0621 18:27:07.648150   30068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:27:07.798080   30068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:27:07.803683   30068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:27:07.803731   30068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:27:07.820345   30068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
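The disable step above amounts to renaming any bridge/podman CNI config so the runtime ignores it; for the one file it found, the manual equivalent would be:

	sudo mv /etc/cni/net.d/87-podman-bridge.conflist \
	        /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled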
	I0621 18:27:07.820363   30068 start.go:494] detecting cgroup driver to use...
	I0621 18:27:07.820412   30068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:27:07.835960   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:27:07.849269   30068 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:27:07.849324   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:27:07.861858   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:27:07.874371   30068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:27:07.984965   30068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:27:08.126897   30068 docker.go:233] disabling docker service ...
	I0621 18:27:08.126973   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:27:08.140294   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:27:08.152460   30068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:27:08.289101   30068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:27:08.414578   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
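The cri-docker and docker shutdown above is the usual stop/disable/mask sequence for keeping a competing runtime out of the way; condensed into its generic form:

	# Stop the socket and service, block socket activation, then mask the unit
	# so nothing restarts it behind CRI-O's back.
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service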
	I0621 18:27:08.428193   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:27:08.445335   30068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:27:08.445406   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.454715   30068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:27:08.454780   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.464286   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.473688   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.483215   30068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:27:08.492907   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.502386   30068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.518138   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
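Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following values. The TOML table headers are assumed from CRI-O's stock drop-in layout; only the keys shown are touched by these commands:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]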
	I0621 18:27:08.527822   30068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:27:08.536491   30068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 18:27:08.536537   30068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 18:27:08.548343   30068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 18:27:08.557395   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:27:08.668782   30068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 18:27:08.793146   30068 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:27:08.793228   30068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:27:08.797886   30068 start.go:562] Will wait 60s for crictl version
	I0621 18:27:08.797933   30068 ssh_runner.go:195] Run: which crictl
	I0621 18:27:08.801183   30068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:27:08.838953   30068 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:27:08.839028   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:27:08.865047   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:27:08.892059   30068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:27:08.893365   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:08.895801   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:08.896174   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:08.896198   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:08.896377   30068 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:27:08.900124   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
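The /etc/hosts rewrite above follows a small idempotent pattern: strip any stale line for the name, append the fresh mapping, and copy the result back into place. As a reusable sketch (the function name is made up here):

	update_hosts_entry() {
	  local ip="$1" name="$2"
	  { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
	  sudo cp "/tmp/hosts.$$" /etc/hosts
	}
	update_hosts_entry 192.168.39.1 host.minikube.internal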
	I0621 18:27:08.912152   30068 kubeadm.go:877] updating cluster {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0621 18:27:08.912252   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:27:08.912299   30068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:27:08.941267   30068 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0621 18:27:08.941328   30068 ssh_runner.go:195] Run: which lz4
	I0621 18:27:08.944757   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0621 18:27:08.944843   30068 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0621 18:27:08.948482   30068 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0621 18:27:08.948507   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0621 18:27:10.186487   30068 crio.go:462] duration metric: took 1.241671996s to copy over tarball
	I0621 18:27:10.186568   30068 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0621 18:27:12.219224   30068 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.032622286s)
	I0621 18:27:12.219256   30068 crio.go:469] duration metric: took 2.032747658s to extract the tarball
	I0621 18:27:12.219265   30068 ssh_runner.go:146] rm: /preloaded.tar.lz4
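The preload path above can be reproduced by hand: push the cached tarball into the guest, unpack it into /var so CRI-O's image store starts populated, then delete it. A sketch using the address and image name from this run (the ~/.minikube prefix is an assumption; the integration job uses a custom MINIKUBE_HOME):

	KEY=~/.minikube/machines/ha-406291/id_rsa
	PRELOAD=~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	ssh -i "$KEY" docker@192.168.39.198 'sudo tee /preloaded.tar.lz4 >/dev/null' < "$PRELOAD"
	ssh -i "$KEY" docker@192.168.39.198 \
	  'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4'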
	I0621 18:27:12.255526   30068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:27:12.297692   30068 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 18:27:12.297715   30068 cache_images.go:84] Images are preloaded, skipping loading
	I0621 18:27:12.297725   30068 kubeadm.go:928] updating node { 192.168.39.198 8443 v1.30.2 crio true true} ...
	I0621 18:27:12.297863   30068 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0621 18:27:12.297956   30068 ssh_runner.go:195] Run: crio config
	I0621 18:27:12.347243   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:27:12.347276   30068 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0621 18:27:12.347288   30068 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0621 18:27:12.347314   30068 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.198 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-406291 NodeName:ha-406291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0621 18:27:12.347487   30068 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-406291"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
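Before handing a rendered config like the one above to kubeadm init, it can be sanity-checked on its own. Recent kubeadm releases ship a subcommand for this; if a given version lacks it, `kubeadm init --config ... --dry-run` gives a similar check:

	# Run inside the guest, where the pinned kubeadm binary lives.
	sudo /var/lib/minikube/binaries/v1.30.2/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml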
	
	I0621 18:27:12.347514   30068 kube-vip.go:115] generating kube-vip config ...
	I0621 18:27:12.347563   30068 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:27:12.362180   30068 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:27:12.362273   30068 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
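Once this static pod is up, the control-plane VIP declared above should answer on the API port. A quick probe (VIP and port taken from the config; any HTTP response, even a 401/403, shows the address is being served):

	curl -k https://192.168.39.254:8443/version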
	I0621 18:27:12.362316   30068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:27:12.371448   30068 binaries.go:44] Found k8s binaries, skipping transfer
	I0621 18:27:12.371499   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0621 18:27:12.380031   30068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0621 18:27:12.395354   30068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0621 18:27:12.410533   30068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0621 18:27:12.425474   30068 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0621 18:27:12.440059   30068 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0621 18:27:12.443523   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:27:12.454828   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:27:12.572486   30068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0621 18:27:12.589057   30068 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.198
	I0621 18:27:12.589078   30068 certs.go:194] generating shared ca certs ...
	I0621 18:27:12.589095   30068 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.589221   30068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:27:12.589272   30068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:27:12.589282   30068 certs.go:256] generating profile certs ...
	I0621 18:27:12.589333   30068 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:27:12.589346   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt with IP's: []
	I0621 18:27:12.759863   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt ...
	I0621 18:27:12.759890   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt: {Name:mk1350197087e6f37ca28e80a43c199beace4f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.760090   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key ...
	I0621 18:27:12.760104   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key: {Name:mk90994b992a268304b337419707e3332d3f039a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.760206   30068 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92
	I0621 18:27:12.760222   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.254]
	I0621 18:27:13.132336   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 ...
	I0621 18:27:13.132362   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92: {Name:mke7daa70ff2d7bf8fa87eea51b1ed6731c0dd6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.132530   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92 ...
	I0621 18:27:13.132546   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92: {Name:mk310235904dba1c4db66ef73b8dcc06ff030051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.132647   30068 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:27:13.132737   30068 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
	I0621 18:27:13.132790   30068 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:27:13.132806   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt with IP's: []
	I0621 18:27:13.317891   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt ...
	I0621 18:27:13.317927   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt: {Name:mk5e450ef3633fa54e81eaeb94f9408c94729912 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.318119   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key ...
	I0621 18:27:13.318132   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key: {Name:mk3a1443924b05c36251566d5313d0eeb467e0fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.318220   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:27:13.318241   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:27:13.318251   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:27:13.318264   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:27:13.318274   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:27:13.318290   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:27:13.318302   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:27:13.318314   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:27:13.318363   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:27:13.318396   30068 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:27:13.318406   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:27:13.318428   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:27:13.318449   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:27:13.318469   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:27:13.318506   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:13.318531   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.318544   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.318556   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.319121   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:27:13.345382   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:27:13.379289   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:27:13.406853   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:27:13.430624   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0621 18:27:13.452498   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0621 18:27:13.474381   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:27:13.497475   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:27:13.520548   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:27:13.543849   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:27:13.569722   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:27:13.594191   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0621 18:27:13.611312   30068 ssh_runner.go:195] Run: openssl version
	I0621 18:27:13.616881   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:27:13.627054   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.631162   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.631214   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.636845   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:27:13.648132   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:27:13.658846   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.663074   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.663140   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.668358   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 18:27:13.678369   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:27:13.688293   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.692517   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.692581   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.697837   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
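The *.0 symlinks created above follow OpenSSL's subject-hash naming convention, so the hash in the link name can be derived straight from the certificate rather than hard-coded:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 for this CA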
	I0621 18:27:13.707967   30068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:27:13.711761   30068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 18:27:13.711821   30068 kubeadm.go:391] StartCluster: {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:27:13.711887   30068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0621 18:27:13.711960   30068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0621 18:27:13.752929   30068 cri.go:89] found id: ""
	I0621 18:27:13.753017   30068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0621 18:27:13.762514   30068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0621 18:27:13.771612   30068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0621 18:27:13.781740   30068 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0621 18:27:13.781758   30068 kubeadm.go:156] found existing configuration files:
	
	I0621 18:27:13.781811   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0621 18:27:13.790876   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0621 18:27:13.790943   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0621 18:27:13.800011   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0621 18:27:13.809117   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0621 18:27:13.809168   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0621 18:27:13.818279   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0621 18:27:13.827522   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0621 18:27:13.827584   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0621 18:27:13.836671   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0621 18:27:13.845242   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0621 18:27:13.845298   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0621 18:27:13.854365   30068 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0621 18:27:13.951888   30068 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0621 18:27:13.951970   30068 kubeadm.go:309] [preflight] Running pre-flight checks
	I0621 18:27:14.081675   30068 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0621 18:27:14.081845   30068 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0621 18:27:14.081983   30068 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0621 18:27:14.292951   30068 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0621 18:27:14.423174   30068 out.go:204]   - Generating certificates and keys ...
	I0621 18:27:14.423287   30068 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0621 18:27:14.423355   30068 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0621 18:27:14.524306   30068 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0621 18:27:14.693249   30068 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0621 18:27:14.771462   30068 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0621 18:27:14.965492   30068 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0621 18:27:15.095342   30068 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0621 18:27:15.095646   30068 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-406291 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I0621 18:27:15.247328   30068 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0621 18:27:15.247729   30068 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-406291 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I0621 18:27:15.326656   30068 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0621 18:27:15.470979   30068 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0621 18:27:15.620090   30068 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0621 18:27:15.620402   30068 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0621 18:27:15.715693   30068 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0621 18:27:16.259484   30068 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0621 18:27:16.704626   30068 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0621 18:27:16.836633   30068 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0621 18:27:16.996818   30068 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0621 18:27:16.997517   30068 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0621 18:27:16.999949   30068 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0621 18:27:17.001874   30068 out.go:204]   - Booting up control plane ...
	I0621 18:27:17.001982   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0621 18:27:17.002874   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0621 18:27:17.003729   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0621 18:27:17.018894   30068 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0621 18:27:17.019816   30068 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0621 18:27:17.019944   30068 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0621 18:27:17.138099   30068 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0621 18:27:17.138195   30068 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0621 18:27:17.639115   30068 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.282189ms
	I0621 18:27:17.639214   30068 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0621 18:27:23.502026   30068 kubeadm.go:309] [api-check] The API server is healthy after 5.864418149s
	I0621 18:27:23.512938   30068 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0621 18:27:23.528670   30068 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0621 18:27:24.059886   30068 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0621 18:27:24.060060   30068 kubeadm.go:309] [mark-control-plane] Marking the node ha-406291 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0621 18:27:24.071607   30068 kubeadm.go:309] [bootstrap-token] Using token: ha2utu.p9k0bq1xsr5791t7
	I0621 18:27:24.073185   30068 out.go:204]   - Configuring RBAC rules ...
	I0621 18:27:24.073336   30068 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0621 18:27:24.084336   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0621 18:27:24.092265   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0621 18:27:24.096415   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0621 18:27:24.101175   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0621 18:27:24.104689   30068 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0621 18:27:24.121568   30068 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0621 18:27:24.349610   30068 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0621 18:27:24.907607   30068 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0621 18:27:24.908452   30068 kubeadm.go:309] 
	I0621 18:27:24.908529   30068 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0621 18:27:24.908541   30068 kubeadm.go:309] 
	I0621 18:27:24.908607   30068 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0621 18:27:24.908645   30068 kubeadm.go:309] 
	I0621 18:27:24.908698   30068 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0621 18:27:24.908780   30068 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0621 18:27:24.908863   30068 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0621 18:27:24.908873   30068 kubeadm.go:309] 
	I0621 18:27:24.908975   30068 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0621 18:27:24.908993   30068 kubeadm.go:309] 
	I0621 18:27:24.909038   30068 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0621 18:27:24.909045   30068 kubeadm.go:309] 
	I0621 18:27:24.909086   30068 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0621 18:27:24.909160   30068 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0621 18:27:24.909256   30068 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0621 18:27:24.909274   30068 kubeadm.go:309] 
	I0621 18:27:24.909401   30068 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0621 18:27:24.909522   30068 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0621 18:27:24.909544   30068 kubeadm.go:309] 
	I0621 18:27:24.909671   30068 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ha2utu.p9k0bq1xsr5791t7 \
	I0621 18:27:24.909771   30068 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df \
	I0621 18:27:24.909810   30068 kubeadm.go:309] 	--control-plane 
	I0621 18:27:24.909824   30068 kubeadm.go:309] 
	I0621 18:27:24.909898   30068 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0621 18:27:24.909904   30068 kubeadm.go:309] 
	I0621 18:27:24.909977   30068 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ha2utu.p9k0bq1xsr5791t7 \
	I0621 18:27:24.910064   30068 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df 
	I0621 18:27:24.910664   30068 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0621 18:27:24.910700   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:27:24.910708   30068 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0621 18:27:24.912398   30068 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0621 18:27:24.913676   30068 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0621 18:27:24.919660   30068 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0621 18:27:24.919677   30068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0621 18:27:24.938734   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0621 18:27:25.303975   30068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0621 18:27:25.304070   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:25.304073   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-406291 minikube.k8s.io/updated_at=2024_06_21T18_27_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600 minikube.k8s.io/name=ha-406291 minikube.k8s.io/primary=true
	I0621 18:27:25.334777   30068 ops.go:34] apiserver oom_adj: -16
	I0621 18:27:25.436873   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:25.937461   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:26.436991   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:26.937206   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:27.437152   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:27.937860   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:28.437177   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:28.937036   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:29.437007   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:29.937140   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:30.437060   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:30.937199   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:31.437695   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:31.937675   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:32.437034   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:32.937808   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:33.437793   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:33.937401   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:34.437307   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:34.937172   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:35.437428   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:35.937146   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:36.436951   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:36.937873   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:37.039583   30068 kubeadm.go:1107] duration metric: took 11.735587948s to wait for elevateKubeSystemPrivileges
	W0621 18:27:37.039626   30068 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0621 18:27:37.039635   30068 kubeadm.go:393] duration metric: took 23.327819322s to StartCluster
	I0621 18:27:37.039654   30068 settings.go:142] acquiring lock: {Name:mkdbb660cad4d8fb446e5c2ca4439ea3326e9592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:37.039737   30068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:27:37.040362   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/kubeconfig: {Name:mk87038194ab41f67dd50d90b017d32a83c3da4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:37.040584   30068 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:27:37.040604   30068 start.go:240] waiting for startup goroutines ...
	I0621 18:27:37.040603   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0621 18:27:37.040612   30068 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0621 18:27:37.040669   30068 addons.go:69] Setting storage-provisioner=true in profile "ha-406291"
	I0621 18:27:37.040677   30068 addons.go:69] Setting default-storageclass=true in profile "ha-406291"
	I0621 18:27:37.040699   30068 addons.go:234] Setting addon storage-provisioner=true in "ha-406291"
	I0621 18:27:37.040730   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:27:37.040700   30068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-406291"
	I0621 18:27:37.040772   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:37.041052   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.041075   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.041146   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.041174   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.055583   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42699
	I0621 18:27:37.056062   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.056549   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.056570   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.056894   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.057371   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.057399   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.061343   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44857
	I0621 18:27:37.061846   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.062393   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.062418   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.062721   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.062885   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.065021   30068 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:27:37.065351   30068 kapi.go:59] client config for ha-406291: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt", KeyFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key", CAFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf98a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0621 18:27:37.065825   30068 cert_rotation.go:137] Starting client certificate rotation controller
	I0621 18:27:37.066065   30068 addons.go:234] Setting addon default-storageclass=true in "ha-406291"
	I0621 18:27:37.066106   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:27:37.066471   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.066512   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.072759   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39433
	I0621 18:27:37.073274   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.073791   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.073819   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.074169   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.074346   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.076096   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:37.078312   30068 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0621 18:27:37.079815   30068 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 18:27:37.079840   30068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0621 18:27:37.079864   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:37.081896   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0621 18:27:37.082293   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.082859   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.082878   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.083163   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.083202   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.083607   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.083648   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.083733   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:37.083752   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.083817   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:37.083990   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:37.084135   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:37.084288   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:37.103512   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42081
	I0621 18:27:37.103937   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.104456   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.104473   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.104853   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.105052   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.106976   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:37.107211   30068 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0621 18:27:37.107231   30068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0621 18:27:37.107252   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:37.110295   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.110729   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:37.110755   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.110870   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:37.111030   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:37.111197   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:37.111314   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:37.137868   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0621 18:27:37.228739   30068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 18:27:37.290397   30068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0621 18:27:37.684619   30068 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0621 18:27:37.902862   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.902882   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.902957   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.902988   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903179   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903194   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903203   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.903210   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903287   30068 main.go:141] libmachine: (ha-406291) DBG | Closing plugin on server side
	I0621 18:27:37.903300   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903312   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903321   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.903328   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903474   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903485   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903513   30068 main.go:141] libmachine: (ha-406291) DBG | Closing plugin on server side
	I0621 18:27:37.903578   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903595   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903740   30068 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0621 18:27:37.903767   30068 round_trippers.go:469] Request Headers:
	I0621 18:27:37.903778   30068 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:27:37.903784   30068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:27:37.922164   30068 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0621 18:27:37.922691   30068 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0621 18:27:37.922706   30068 round_trippers.go:469] Request Headers:
	I0621 18:27:37.922713   30068 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:27:37.922718   30068 round_trippers.go:473]     Content-Type: application/json
	I0621 18:27:37.922720   30068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:27:37.926249   30068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0621 18:27:37.926491   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.926512   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.926731   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.926748   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.928515   30068 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0621 18:27:37.930095   30068 addons.go:510] duration metric: took 889.47949ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0621 18:27:37.930127   30068 start.go:245] waiting for cluster config update ...
	I0621 18:27:37.930137   30068 start.go:254] writing updated cluster config ...
	I0621 18:27:37.931687   30068 out.go:177] 
	I0621 18:27:37.933039   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:37.933102   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:37.934716   30068 out.go:177] * Starting "ha-406291-m02" control-plane node in "ha-406291" cluster
	I0621 18:27:37.935953   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:27:37.935970   30068 cache.go:56] Caching tarball of preloaded images
	I0621 18:27:37.936052   30068 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:27:37.936063   30068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:27:37.936142   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:37.936325   30068 start.go:360] acquireMachinesLock for ha-406291-m02: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:27:37.936370   30068 start.go:364] duration metric: took 24.972µs to acquireMachinesLock for "ha-406291-m02"
	I0621 18:27:37.936392   30068 start.go:93] Provisioning new machine with config: &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:27:37.936481   30068 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0621 18:27:37.938349   30068 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0621 18:27:37.938428   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.938450   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.952767   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34515
	I0621 18:27:37.953176   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.953649   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.953669   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.953963   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.954162   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:37.954301   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:37.954431   30068 start.go:159] libmachine.API.Create for "ha-406291" (driver="kvm2")
	I0621 18:27:37.954456   30068 client.go:168] LocalClient.Create starting
	I0621 18:27:37.954488   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem
	I0621 18:27:37.954518   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:27:37.954538   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:27:37.954589   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem
	I0621 18:27:37.954607   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:27:37.954621   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:27:37.954636   30068 main.go:141] libmachine: Running pre-create checks...
	I0621 18:27:37.954644   30068 main.go:141] libmachine: (ha-406291-m02) Calling .PreCreateCheck
	I0621 18:27:37.954836   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:37.955238   30068 main.go:141] libmachine: Creating machine...
	I0621 18:27:37.955253   30068 main.go:141] libmachine: (ha-406291-m02) Calling .Create
	I0621 18:27:37.955404   30068 main.go:141] libmachine: (ha-406291-m02) Creating KVM machine...
	I0621 18:27:37.956748   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found existing default KVM network
	I0621 18:27:37.956951   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found existing private KVM network mk-ha-406291
	I0621 18:27:37.957069   30068 main.go:141] libmachine: (ha-406291-m02) Setting up store path in /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 ...
	I0621 18:27:37.957091   30068 main.go:141] libmachine: (ha-406291-m02) Building disk image from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 18:27:37.957139   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:37.957062   30460 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:27:37.957278   30068 main.go:141] libmachine: (ha-406291-m02) Downloading /home/jenkins/minikube-integration/19112-8111/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0621 18:27:38.178433   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.178291   30460 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa...
	I0621 18:27:38.322659   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.322470   30460 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/ha-406291-m02.rawdisk...
	I0621 18:27:38.322709   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 (perms=drwx------)
	I0621 18:27:38.322719   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Writing magic tar header
	I0621 18:27:38.322734   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Writing SSH key tar header
	I0621 18:27:38.322745   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.322583   30460 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 ...
	I0621 18:27:38.322758   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02
	I0621 18:27:38.322822   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines
	I0621 18:27:38.322839   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines (perms=drwxr-xr-x)
	I0621 18:27:38.322855   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube (perms=drwxr-xr-x)
	I0621 18:27:38.322864   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111 (perms=drwxrwxr-x)
	I0621 18:27:38.322874   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0621 18:27:38.322882   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0621 18:27:38.322896   30068 main.go:141] libmachine: (ha-406291-m02) Creating domain...
	I0621 18:27:38.322919   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:27:38.322939   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111
	I0621 18:27:38.322950   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0621 18:27:38.322968   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins
	I0621 18:27:38.322980   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home
	I0621 18:27:38.322988   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Skipping /home - not owner
	I0621 18:27:38.324031   30068 main.go:141] libmachine: (ha-406291-m02) define libvirt domain using xml: 
	I0621 18:27:38.324058   30068 main.go:141] libmachine: (ha-406291-m02) <domain type='kvm'>
	I0621 18:27:38.324071   30068 main.go:141] libmachine: (ha-406291-m02)   <name>ha-406291-m02</name>
	I0621 18:27:38.324078   30068 main.go:141] libmachine: (ha-406291-m02)   <memory unit='MiB'>2200</memory>
	I0621 18:27:38.324087   30068 main.go:141] libmachine: (ha-406291-m02)   <vcpu>2</vcpu>
	I0621 18:27:38.324098   30068 main.go:141] libmachine: (ha-406291-m02)   <features>
	I0621 18:27:38.324107   30068 main.go:141] libmachine: (ha-406291-m02)     <acpi/>
	I0621 18:27:38.324116   30068 main.go:141] libmachine: (ha-406291-m02)     <apic/>
	I0621 18:27:38.324125   30068 main.go:141] libmachine: (ha-406291-m02)     <pae/>
	I0621 18:27:38.324134   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324149   30068 main.go:141] libmachine: (ha-406291-m02)   </features>
	I0621 18:27:38.324164   30068 main.go:141] libmachine: (ha-406291-m02)   <cpu mode='host-passthrough'>
	I0621 18:27:38.324173   30068 main.go:141] libmachine: (ha-406291-m02)   
	I0621 18:27:38.324184   30068 main.go:141] libmachine: (ha-406291-m02)   </cpu>
	I0621 18:27:38.324199   30068 main.go:141] libmachine: (ha-406291-m02)   <os>
	I0621 18:27:38.324209   30068 main.go:141] libmachine: (ha-406291-m02)     <type>hvm</type>
	I0621 18:27:38.324220   30068 main.go:141] libmachine: (ha-406291-m02)     <boot dev='cdrom'/>
	I0621 18:27:38.324231   30068 main.go:141] libmachine: (ha-406291-m02)     <boot dev='hd'/>
	I0621 18:27:38.324258   30068 main.go:141] libmachine: (ha-406291-m02)     <bootmenu enable='no'/>
	I0621 18:27:38.324280   30068 main.go:141] libmachine: (ha-406291-m02)   </os>
	I0621 18:27:38.324293   30068 main.go:141] libmachine: (ha-406291-m02)   <devices>
	I0621 18:27:38.324310   30068 main.go:141] libmachine: (ha-406291-m02)     <disk type='file' device='cdrom'>
	I0621 18:27:38.324333   30068 main.go:141] libmachine: (ha-406291-m02)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/boot2docker.iso'/>
	I0621 18:27:38.324344   30068 main.go:141] libmachine: (ha-406291-m02)       <target dev='hdc' bus='scsi'/>
	I0621 18:27:38.324350   30068 main.go:141] libmachine: (ha-406291-m02)       <readonly/>
	I0621 18:27:38.324357   30068 main.go:141] libmachine: (ha-406291-m02)     </disk>
	I0621 18:27:38.324363   30068 main.go:141] libmachine: (ha-406291-m02)     <disk type='file' device='disk'>
	I0621 18:27:38.324375   30068 main.go:141] libmachine: (ha-406291-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0621 18:27:38.324390   30068 main.go:141] libmachine: (ha-406291-m02)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/ha-406291-m02.rawdisk'/>
	I0621 18:27:38.324401   30068 main.go:141] libmachine: (ha-406291-m02)       <target dev='hda' bus='virtio'/>
	I0621 18:27:38.324412   30068 main.go:141] libmachine: (ha-406291-m02)     </disk>
	I0621 18:27:38.324421   30068 main.go:141] libmachine: (ha-406291-m02)     <interface type='network'>
	I0621 18:27:38.324431   30068 main.go:141] libmachine: (ha-406291-m02)       <source network='mk-ha-406291'/>
	I0621 18:27:38.324442   30068 main.go:141] libmachine: (ha-406291-m02)       <model type='virtio'/>
	I0621 18:27:38.324453   30068 main.go:141] libmachine: (ha-406291-m02)     </interface>
	I0621 18:27:38.324465   30068 main.go:141] libmachine: (ha-406291-m02)     <interface type='network'>
	I0621 18:27:38.324474   30068 main.go:141] libmachine: (ha-406291-m02)       <source network='default'/>
	I0621 18:27:38.324481   30068 main.go:141] libmachine: (ha-406291-m02)       <model type='virtio'/>
	I0621 18:27:38.324493   30068 main.go:141] libmachine: (ha-406291-m02)     </interface>
	I0621 18:27:38.324503   30068 main.go:141] libmachine: (ha-406291-m02)     <serial type='pty'>
	I0621 18:27:38.324516   30068 main.go:141] libmachine: (ha-406291-m02)       <target port='0'/>
	I0621 18:27:38.324527   30068 main.go:141] libmachine: (ha-406291-m02)     </serial>
	I0621 18:27:38.324540   30068 main.go:141] libmachine: (ha-406291-m02)     <console type='pty'>
	I0621 18:27:38.324553   30068 main.go:141] libmachine: (ha-406291-m02)       <target type='serial' port='0'/>
	I0621 18:27:38.324562   30068 main.go:141] libmachine: (ha-406291-m02)     </console>
	I0621 18:27:38.324572   30068 main.go:141] libmachine: (ha-406291-m02)     <rng model='virtio'>
	I0621 18:27:38.324596   30068 main.go:141] libmachine: (ha-406291-m02)       <backend model='random'>/dev/random</backend>
	I0621 18:27:38.324609   30068 main.go:141] libmachine: (ha-406291-m02)     </rng>
	I0621 18:27:38.324630   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324640   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324648   30068 main.go:141] libmachine: (ha-406291-m02)   </devices>
	I0621 18:27:38.324660   30068 main.go:141] libmachine: (ha-406291-m02) </domain>
	I0621 18:27:38.324670   30068 main.go:141] libmachine: (ha-406291-m02) 
	I0621 18:27:38.332042   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:20:08:0e in network default
	I0621 18:27:38.332641   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring networks are active...
	I0621 18:27:38.332676   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:38.333428   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring network default is active
	I0621 18:27:38.333804   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring network mk-ha-406291 is active
	I0621 18:27:38.334296   30068 main.go:141] libmachine: (ha-406291-m02) Getting domain xml...
	I0621 18:27:38.335120   30068 main.go:141] libmachine: (ha-406291-m02) Creating domain...
	I0621 18:27:39.549305   30068 main.go:141] libmachine: (ha-406291-m02) Waiting to get IP...
	I0621 18:27:39.550967   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:39.551951   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:39.551976   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:39.551936   30460 retry.go:31] will retry after 267.635955ms: waiting for machine to come up
	I0621 18:27:39.821522   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:39.821997   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:39.822029   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:39.821946   30460 retry.go:31] will retry after 374.873977ms: waiting for machine to come up
	I0621 18:27:40.198386   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:40.198873   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:40.198904   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:40.198809   30460 retry.go:31] will retry after 315.815993ms: waiting for machine to come up
	I0621 18:27:40.516366   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:40.516862   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:40.516886   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:40.516817   30460 retry.go:31] will retry after 541.866776ms: waiting for machine to come up
	I0621 18:27:41.060525   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:41.061206   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:41.061240   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:41.061128   30460 retry.go:31] will retry after 493.062164ms: waiting for machine to come up
	I0621 18:27:41.555747   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:41.556109   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:41.556139   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:41.556061   30460 retry.go:31] will retry after 805.68132ms: waiting for machine to come up
	I0621 18:27:42.362929   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:42.363432   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:42.363464   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:42.363390   30460 retry.go:31] will retry after 986.445399ms: waiting for machine to come up
	I0621 18:27:43.351818   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:43.352265   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:43.352293   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:43.352201   30460 retry.go:31] will retry after 1.001415085s: waiting for machine to come up
	I0621 18:27:44.355253   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:44.355689   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:44.355710   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:44.355671   30460 retry.go:31] will retry after 1.270979624s: waiting for machine to come up
	I0621 18:27:45.627787   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:45.628323   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:45.628354   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:45.628272   30460 retry.go:31] will retry after 2.328221347s: waiting for machine to come up
	I0621 18:27:47.958352   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:47.958918   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:47.958945   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:47.958858   30460 retry.go:31] will retry after 2.603205559s: waiting for machine to come up
	I0621 18:27:50.565502   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:50.565956   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:50.565982   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:50.565839   30460 retry.go:31] will retry after 3.267607258s: waiting for machine to come up
	I0621 18:27:53.834801   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:53.835311   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:53.835344   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:53.835270   30460 retry.go:31] will retry after 4.450176964s: waiting for machine to come up
	I0621 18:27:58.286744   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.287205   30068 main.go:141] libmachine: (ha-406291-m02) Found IP for machine: 192.168.39.89
	I0621 18:27:58.287228   30068 main.go:141] libmachine: (ha-406291-m02) Reserving static IP address...
	I0621 18:27:58.287241   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has current primary IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.287601   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find host DHCP lease matching {name: "ha-406291-m02", mac: "52:54:00:a6:9a:09", ip: "192.168.39.89"} in network mk-ha-406291
	I0621 18:27:58.359643   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Getting to WaitForSSH function...
	I0621 18:27:58.359672   30068 main.go:141] libmachine: (ha-406291-m02) Reserved static IP address: 192.168.39.89
	I0621 18:27:58.359686   30068 main.go:141] libmachine: (ha-406291-m02) Waiting for SSH to be available...
	I0621 18:27:58.362234   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.362656   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.362687   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.362831   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH client type: external
	I0621 18:27:58.362856   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa (-rw-------)
	I0621 18:27:58.362889   30068 main.go:141] libmachine: (ha-406291-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:58.362901   30068 main.go:141] libmachine: (ha-406291-m02) DBG | About to run SSH command:
	I0621 18:27:58.362914   30068 main.go:141] libmachine: (ha-406291-m02) DBG | exit 0
	I0621 18:27:58.489760   30068 main.go:141] libmachine: (ha-406291-m02) DBG | SSH cmd err, output: <nil>: 
	I0621 18:27:58.490247   30068 main.go:141] libmachine: (ha-406291-m02) KVM machine creation complete!
	I0621 18:27:58.490512   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:58.491093   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:58.491338   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:58.491506   30068 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0621 18:27:58.491523   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:27:58.492807   30068 main.go:141] libmachine: Detecting operating system of created instance...
	I0621 18:27:58.492820   30068 main.go:141] libmachine: Waiting for SSH to be available...
	I0621 18:27:58.492825   30068 main.go:141] libmachine: Getting to WaitForSSH function...
	I0621 18:27:58.492853   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.495422   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.495802   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.495822   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.496013   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.496199   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.496377   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.496515   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.496690   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.496943   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.496957   30068 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0621 18:27:58.609072   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:58.609094   30068 main.go:141] libmachine: Detecting the provisioner...
	I0621 18:27:58.609101   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.611976   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.612412   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.612450   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.612655   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.612869   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.613083   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.613234   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.613421   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.613617   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.613629   30068 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0621 18:27:58.726636   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0621 18:27:58.726736   30068 main.go:141] libmachine: found compatible host: buildroot
	I0621 18:27:58.726751   30068 main.go:141] libmachine: Provisioning with buildroot...
	I0621 18:27:58.726768   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.727017   30068 buildroot.go:166] provisioning hostname "ha-406291-m02"
	I0621 18:27:58.727040   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.727234   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.729851   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.730255   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.730296   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.730453   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.730628   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.730787   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.730932   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.731090   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.731271   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.731295   30068 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291-m02 && echo "ha-406291-m02" | sudo tee /etc/hostname
	I0621 18:27:58.855682   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291-m02
	
	I0621 18:27:58.855710   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.858373   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.858679   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.858702   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.858921   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.859107   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.859289   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.859473   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.859613   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.859768   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.859784   30068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:27:58.979692   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:58.979722   30068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:27:58.979735   30068 buildroot.go:174] setting up certificates
	I0621 18:27:58.979743   30068 provision.go:84] configureAuth start
	I0621 18:27:58.979750   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.980076   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:58.982730   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.983078   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.983110   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.983252   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.985344   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.985701   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.985721   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.985890   30068 provision.go:143] copyHostCerts
	I0621 18:27:58.985924   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:58.985962   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:27:58.985976   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:58.986057   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:27:58.986156   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:58.986180   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:27:58.986187   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:58.986229   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:27:58.986293   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:58.986317   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:27:58.986326   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:58.986360   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:27:58.986426   30068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291-m02 san=[127.0.0.1 192.168.39.89 ha-406291-m02 localhost minikube]
	I0621 18:27:59.066564   30068 provision.go:177] copyRemoteCerts
	I0621 18:27:59.066626   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:27:59.066653   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.069578   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.069924   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.069947   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.070132   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.070298   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.070432   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.070553   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.157218   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:27:59.157315   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 18:27:59.181198   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:27:59.181277   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:27:59.204590   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:27:59.204671   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0621 18:27:59.228836   30068 provision.go:87] duration metric: took 249.081961ms to configureAuth
	I0621 18:27:59.228857   30068 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:27:59.229023   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:59.229086   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.231759   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.232083   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.232114   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.232338   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.232525   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.232684   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.232859   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.233030   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:59.233222   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:59.233247   30068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:27:59.513149   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 18:27:59.513176   30068 main.go:141] libmachine: Checking connection to Docker...
	I0621 18:27:59.513184   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetURL
	I0621 18:27:59.514352   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using libvirt version 6000000
	I0621 18:27:59.516825   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.517208   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.517232   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.517421   30068 main.go:141] libmachine: Docker is up and running!
	I0621 18:27:59.517438   30068 main.go:141] libmachine: Reticulating splines...
	I0621 18:27:59.517446   30068 client.go:171] duration metric: took 21.562982419s to LocalClient.Create
	I0621 18:27:59.517469   30068 start.go:167] duration metric: took 21.563040702s to libmachine.API.Create "ha-406291"
	I0621 18:27:59.517482   30068 start.go:293] postStartSetup for "ha-406291-m02" (driver="kvm2")
	I0621 18:27:59.517494   30068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:27:59.517516   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.517768   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:27:59.517792   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.520113   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.520510   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.520540   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.520681   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.520881   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.521084   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.521256   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.607755   30068 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:27:59.611555   30068 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:27:59.611581   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:27:59.611696   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:27:59.611804   30068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:27:59.611817   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:27:59.611939   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:27:59.620359   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:59.643420   30068 start.go:296] duration metric: took 125.923821ms for postStartSetup
	I0621 18:27:59.643465   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:59.644062   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:59.646345   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.646685   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.646713   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.646924   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:59.647158   30068 start.go:128] duration metric: took 21.710666055s to createHost
	I0621 18:27:59.647181   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.649469   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.649766   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.649808   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.649962   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.650164   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.650334   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.650463   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.650585   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:59.650778   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:59.650790   30068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0621 18:27:59.762223   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718994479.737744516
	
	I0621 18:27:59.762248   30068 fix.go:216] guest clock: 1718994479.737744516
	I0621 18:27:59.762259   30068 fix.go:229] Guest: 2024-06-21 18:27:59.737744516 +0000 UTC Remote: 2024-06-21 18:27:59.647170431 +0000 UTC m=+77.232139235 (delta=90.574085ms)
	I0621 18:27:59.762274   30068 fix.go:200] guest clock delta is within tolerance: 90.574085ms
	I0621 18:27:59.762279   30068 start.go:83] releasing machines lock for "ha-406291-m02", held for 21.825898335s
	I0621 18:27:59.762294   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.762550   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:59.765379   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.765744   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.765772   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.768017   30068 out.go:177] * Found network options:
	I0621 18:27:59.769201   30068 out.go:177]   - NO_PROXY=192.168.39.198
	W0621 18:27:59.770311   30068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0621 18:27:59.770350   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.770853   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.771049   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.771143   30068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:27:59.771180   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	W0621 18:27:59.771247   30068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0621 18:27:59.771305   30068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:27:59.771322   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.774073   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774210   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774455   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.774482   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774586   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.774595   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.774615   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774740   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.774796   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.774875   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.774963   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.775030   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.775150   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.775184   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:28:00.009851   30068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:28:00.016373   30068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:28:00.016450   30068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:28:00.032199   30068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0621 18:28:00.032221   30068 start.go:494] detecting cgroup driver to use...
	I0621 18:28:00.032283   30068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:28:00.047343   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:28:00.061720   30068 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:28:00.061774   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:28:00.074668   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:28:00.087919   30068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:28:00.213060   30068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:28:00.376339   30068 docker.go:233] disabling docker service ...
	I0621 18:28:00.376406   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:28:00.391732   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:28:00.405305   30068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:28:00.525867   30068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:28:00.642362   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 18:28:00.656276   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:28:00.673811   30068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:28:00.673883   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.683794   30068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:28:00.683849   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.693601   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.703298   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.712924   30068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:28:00.722921   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.733272   30068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.749781   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.759708   30068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:28:00.768749   30068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 18:28:00.768811   30068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 18:28:00.780758   30068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 18:28:00.789993   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:28:00.904855   30068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 18:28:01.039631   30068 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:28:01.039706   30068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:28:01.044480   30068 start.go:562] Will wait 60s for crictl version
	I0621 18:28:01.044536   30068 ssh_runner.go:195] Run: which crictl
	I0621 18:28:01.048220   30068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:28:01.089333   30068 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:28:01.089402   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:28:01.115665   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:28:01.144585   30068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:28:01.145952   30068 out.go:177]   - env NO_PROXY=192.168.39.198
	I0621 18:28:01.147149   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:28:01.149745   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:28:01.150121   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:28:01.150153   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:28:01.150424   30068 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:28:01.154395   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:28:01.167802   30068 mustload.go:65] Loading cluster: ha-406291
	I0621 18:28:01.168024   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:28:01.168528   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:28:01.168581   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:28:01.183458   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35465
	I0621 18:28:01.183955   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:28:01.184452   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:28:01.184472   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:28:01.184809   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:28:01.185006   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:28:01.186504   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:28:01.186796   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:28:01.186838   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:28:01.201898   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38995
	I0621 18:28:01.202307   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:28:01.202715   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:28:01.202735   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:28:01.203060   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:28:01.203242   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:28:01.203402   30068 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.89
	I0621 18:28:01.203414   30068 certs.go:194] generating shared ca certs ...
	I0621 18:28:01.203427   30068 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.203536   30068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:28:01.203569   30068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:28:01.203578   30068 certs.go:256] generating profile certs ...
	I0621 18:28:01.203637   30068 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:28:01.203663   30068 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63
	I0621 18:28:01.203682   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.89 192.168.39.254]
	I0621 18:28:01.277240   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 ...
	I0621 18:28:01.277269   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63: {Name:mk0eb1e86875fe5e87f845f9e621f66001c859bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.277433   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63 ...
	I0621 18:28:01.277446   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63: {Name:mk95e28e76a927e44fae3dabafa76ecc474c70ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.277517   30068 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:28:01.277686   30068 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
	I0621 18:28:01.277852   30068 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:28:01.277870   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:28:01.277883   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:28:01.277894   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:28:01.277906   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:28:01.277922   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:28:01.277934   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:28:01.277946   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:28:01.277957   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:28:01.278003   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:28:01.278030   30068 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:28:01.278039   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:28:01.278059   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:28:01.278080   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:28:01.278100   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:28:01.278136   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:28:01.278162   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.278179   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.278191   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.278220   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:28:01.281289   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:28:01.281749   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:28:01.281771   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:28:01.281960   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:28:01.282180   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:28:01.282351   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:28:01.282534   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:28:01.350153   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0621 18:28:01.355146   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0621 18:28:01.366317   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0621 18:28:01.370418   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0621 18:28:01.381527   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0621 18:28:01.385371   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0621 18:28:01.395583   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0621 18:28:01.399523   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0621 18:28:01.409427   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0621 18:28:01.413340   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0621 18:28:01.424281   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0621 18:28:01.428574   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0621 18:28:01.443501   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:28:01.467141   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:28:01.489464   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:28:01.512839   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:28:01.536345   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0621 18:28:01.560903   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0621 18:28:01.585228   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:28:01.609236   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:28:01.632797   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:28:01.657717   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:28:01.680728   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:28:01.704813   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0621 18:28:01.722206   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0621 18:28:01.739548   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0621 18:28:01.757066   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0621 18:28:01.773769   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0621 18:28:01.790648   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0621 18:28:01.807019   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0621 18:28:01.824606   30068 ssh_runner.go:195] Run: openssl version
	I0621 18:28:01.830760   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:28:01.841994   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.846701   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.846753   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.852556   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0621 18:28:01.863407   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:28:01.874586   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.879134   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.879185   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.884636   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:28:01.895639   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:28:01.907107   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.911747   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.911813   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.917537   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 18:28:01.928452   30068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:28:01.932569   30068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 18:28:01.932640   30068 kubeadm.go:928] updating node {m02 192.168.39.89 8443 v1.30.2 crio true true} ...
	I0621 18:28:01.932831   30068 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0621 18:28:01.932869   30068 kube-vip.go:115] generating kube-vip config ...
	I0621 18:28:01.932919   30068 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:28:01.949970   30068 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:28:01.950046   30068 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0621 18:28:01.950102   30068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:28:01.960116   30068 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0621 18:28:01.960197   30068 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0621 18:28:01.969893   30068 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0621 18:28:01.969926   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0621 18:28:01.969997   30068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0621 18:28:01.970033   30068 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0621 18:28:01.970001   30068 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0621 18:28:01.974344   30068 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0621 18:28:01.974375   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0621 18:28:02.755689   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0621 18:28:02.755764   30068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0621 18:28:02.760415   30068 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0621 18:28:02.760448   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0621 18:28:55.051081   30068 out.go:177] 
	W0621 18:28:55.052955   30068 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0] Decompressors:map[bz2:0xc000769610 gz:0xc000769618 tar:0xc0007695c0 tar.bz2:0xc0007695d0 tar.gz:0xc0007695e0 tar.xz:0xc0007695f0 tar.zst:0xc000769600 tbz2:0xc0007695d0 tgz:0xc0007695e0 txz:0xc0007695f0 tzst:0xc000769600 xz:0xc000769620 zip:0xc000769630 zst:0xc000769628] Getters:map[file:0xc0009371c0 http:0xc0008bcf50 https:0xc0008bcfa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:46716->151.101.193.55:443: read: connection reset by peer
	W0621 18:28:55.052979   30068 out.go:239] * 
	W0621 18:28:55.053829   30068 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0621 18:28:55.055312   30068 out.go:177] 
	
	
	==> CRI-O <==
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.154546657Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995274154523149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c25f27ba-12e9-4684-85f2-7d45ed9de683 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.155083254Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=18fb4459-a090-4c75-8b40-9d040cd5c4bd name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.155185914Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=18fb4459-a090-4c75-8b40-9d040cd5c4bd name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.155471664Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=18fb4459-a090-4c75-8b40-9d040cd5c4bd name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.193695740Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=11258ef1-e796-4a5d-8c30-c0f1ebdd0966 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.193793587Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=11258ef1-e796-4a5d-8c30-c0f1ebdd0966 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.195382313Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3dd91e89-c481-42ca-8b81-8194c687a18f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.195793605Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995274195772041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3dd91e89-c481-42ca-8b81-8194c687a18f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.196338128Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0724e9d1-2d23-4c77-a771-867017845928 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.196422542Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0724e9d1-2d23-4c77-a771-867017845928 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.196811152Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0724e9d1-2d23-4c77-a771-867017845928 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.236771042Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b1368b88-2059-4858-a3de-03e92ff472ae name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.236842315Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b1368b88-2059-4858-a3de-03e92ff472ae name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.238360222Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=323533dd-1929-4487-b167-07bd0a7bdbc1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.238778843Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995274238755371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=323533dd-1929-4487-b167-07bd0a7bdbc1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.239345700Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8428cbb8-175f-4d51-a07d-63bbf91c39af name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.239419555Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8428cbb8-175f-4d51-a07d-63bbf91c39af name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.239709293Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8428cbb8-175f-4d51-a07d-63bbf91c39af name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.275240850Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ed39bdfc-eb18-41ed-9bf6-2c631ea35a9e name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.275322146Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ed39bdfc-eb18-41ed-9bf6-2c631ea35a9e name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.276401955Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f1096c5-9a8b-4290-93e9-fdfc02364ace name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.276787019Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995274276767608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f1096c5-9a8b-4290-93e9-fdfc02364ace name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.277287811Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d24f8f14-62e7-47ce-8191-a438bb2bd31a name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.277339819Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d24f8f14-62e7-47ce-8191-a438bb2bd31a name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:14 ha-406291 crio[679]: time="2024-06-21 18:41:14.277568153Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d24f8f14-62e7-47ce-8191-a438bb2bd31a name=/runtime.v1.RuntimeService/ListContainers
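
The repeated /runtime.v1.RuntimeService/ListContainers traces above show CRI-O answering unfiltered container-list requests. A roughly equivalent listing can be pulled by hand from the node, assuming shell access and that crictl is pointed at the CRI-O socket recorded in the node annotations (a sketch, not part of the test run):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a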
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	252cb2f279857       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   12 minutes ago      Running             busybox                   0                   cd0fd4f6a3d6c       busybox-fc5497c4f-qvl48
	6d732e2622f11       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   0                   59eb38b2794b0       coredns-7db6d8ff4d-7ng4v
	6088ccc5ec4be       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   0                   a68caa8578d30       coredns-7db6d8ff4d-nx5xs
	9d0ad73531279       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       0                   ab6a16146209c       storage-provisioner
	468b13f5a8054       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      13 minutes ago      Running             kindnet-cni               0                   956df8749e8db       kindnet-vnds7
	e41f8891c5177       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      13 minutes ago      Running             kube-proxy                0                   ab9fd8c2e0094       kube-proxy-xnbqj
	96a229fabb5aa       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     13 minutes ago      Running             kube-vip                  0                   79ad95611cf22       kube-vip-ha-406291
	a143e6000662a       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      13 minutes ago      Running             kube-scheduler            0                   7cae0fc993f3a       kube-scheduler-ha-406291
	89b399d67fa40       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      0                   afce4542ea7ca       etcd-ha-406291
	2d71c6ae5cee5       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      13 minutes ago      Running             kube-apiserver            0                   9552de7a0cb73       kube-apiserver-ha-406291
	3fbe446b39e8d       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      13 minutes ago      Running             kube-controller-manager   0                   2b8837f8e36da       kube-controller-manager-ha-406291
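
Each container ID in the table above can be inspected individually; for example, the etcd container's logs could be fetched with something like the following, assuming the ha-406291 VM is still up and reachable via minikube ssh (illustrative sketch only):

	out/minikube-linux-amd64 -p ha-406291 ssh -- sudo crictl logs 89b399d67fa40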
	
	
	==> coredns [6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57758 - 16030 "HINFO IN 938012208132191314.8379741084222464033. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014128651s
	[INFO] 10.244.0.4:60864 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000870211s
	[INFO] 10.244.0.4:49527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014553s
	[INFO] 10.244.0.4:59987 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181145s
	[INFO] 10.244.0.4:59378 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009664502s
	[INFO] 10.244.0.4:59188 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181625s
	[INFO] 10.244.0.4:33100 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137671s
	[INFO] 10.244.0.4:43551 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129631s
	[INFO] 10.244.0.4:59759 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152418s
	[INFO] 10.244.0.4:60292 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090372s
	[INFO] 10.244.0.4:47967 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093215s
	[INFO] 10.244.0.4:44642 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175452s
	[INFO] 10.244.0.4:49677 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000070108s
	
	
	==> coredns [6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45911 - 30730 "HINFO IN 2397840142540691982.2649863782968500509. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014966559s
	[INFO] 10.244.0.4:38404 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.013105268s
	[INFO] 10.244.0.4:49299 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.225770527s
	[INFO] 10.244.0.4:41342 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.010990835s
	[INFO] 10.244.0.4:55838 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003903098s
	[INFO] 10.244.0.4:59078 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163236s
	[INFO] 10.244.0.4:39541 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147137s
	[INFO] 10.244.0.4:47420 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000120366s
	[INFO] 10.244.0.4:54009 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000255172s
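
The CoreDNS query log lines above record lookups from in-cluster clients (10.244.0.4 falls inside this node's 10.244.0.0/24 pod CIDR). A comparable entry could be generated manually, assuming kubectl is pointed at this cluster and the busybox test pod is still running (illustrative only):

	kubectl exec busybox-fc5497c4f-qvl48 -- nslookup kubernetes.default.svc.cluster.local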
	
	
	==> describe nodes <==
	Name:               ha-406291
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=ha-406291
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_21T18_27_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 18:27:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406291
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 18:41:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    ha-406291
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 10b5f2f4e64d426eb3a71e7a23c0cea5
	  System UUID:                10b5f2f4-e64d-426e-b3a7-1e7a23c0cea5
	  Boot ID:                    10778ad9-ed13-4749-a084-25b2b2bfde76
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qvl48              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-7ng4v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-nx5xs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-406291                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-vnds7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-406291             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-406291    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-xnbqj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-406291             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-406291                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node ha-406291 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node ha-406291 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node ha-406291 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m   node-controller  Node ha-406291 event: Registered Node ha-406291 in Controller
	  Normal  NodeReady                13m   kubelet          Node ha-406291 status is now: NodeReady
	
	
	Name:               ha-406291-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406291-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=ha-406291
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_21T18_41_02_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 18:41:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406291-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 18:41:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 18:41:10 +0000   Fri, 21 Jun 2024 18:41:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 18:41:10 +0000   Fri, 21 Jun 2024 18:41:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 18:41:10 +0000   Fri, 21 Jun 2024 18:41:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 18:41:10 +0000   Fri, 21 Jun 2024 18:41:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.193
	  Hostname:    ha-406291-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7aeb6d6b65b246d89e229cf308cb4c9a
	  System UUID:                7aeb6d6b-65b2-46d8-9e22-9cf308cb4c9a
	  Boot ID:                    077bb108-4737-40c3-9892-3695b5a49d4a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-drm4v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-xrm6w              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13s
	  kube-system                 kube-proxy-vknv4           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 8s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  13s (x2 over 13s)  kubelet          Node ha-406291-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x2 over 13s)  kubelet          Node ha-406291-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x2 over 13s)  kubelet          Node ha-406291-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12s                node-controller  Node ha-406291-m03 event: Registered Node ha-406291-m03 in Controller
	  Normal  NodeReady                4s                 kubelet          Node ha-406291-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[Jun21 18:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051748] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037330] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.458081] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.725935] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +4.855560] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun21 18:27] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.057394] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056681] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.167604] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.147792] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.253886] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +3.905184] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.549385] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.060073] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.066237] systemd-fstab-generator[1360]: Ignoring "noauto" option for root device
	[  +0.078680] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.552032] kauditd_printk_skb: 21 callbacks suppressed
	[Jun21 18:28] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8] <==
	{"level":"info","ts":"2024-06-21T18:27:18.93929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-21T18:27:18.93932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgPreVoteResp from f1d2ab5330a2a0e3 at term 1"}
	{"level":"info","ts":"2024-06-21T18:27:18.939332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became candidate at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.939339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgVoteResp from f1d2ab5330a2a0e3 at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.939349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became leader at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.93936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f1d2ab5330a2a0e3 elected leader f1d2ab5330a2a0e3 at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.949394Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.951989Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f1d2ab5330a2a0e3","local-member-attributes":"{Name:ha-406291 ClientURLs:[https://192.168.39.198:2379]}","request-path":"/0/members/f1d2ab5330a2a0e3/attributes","cluster-id":"9fb372ad12afeb1b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-21T18:27:18.952029Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:27:18.952218Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:27:18.966375Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9fb372ad12afeb1b","local-member-id":"f1d2ab5330a2a0e3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.966532Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.966591Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.968078Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.198:2379"}
	{"level":"info","ts":"2024-06-21T18:27:18.969834Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-21T18:27:18.973596Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-21T18:27:18.986355Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-21T18:27:37.357719Z","caller":"traceutil/trace.go:171","msg":"trace[571743030] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"105.067279ms","start":"2024-06-21T18:27:37.252598Z","end":"2024-06-21T18:27:37.357665Z","steps":["trace[571743030] 'process raft request'  (duration: 48.775466ms)","trace[571743030] 'compare'  (duration: 56.093787ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T18:28:12.689426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.176174ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11593268453381319053 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:496 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:369 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-21T18:28:12.689586Z","caller":"traceutil/trace.go:171","msg":"trace[939483523] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"172.541349ms","start":"2024-06-21T18:28:12.517021Z","end":"2024-06-21T18:28:12.689563Z","steps":["trace[939483523] 'process raft request'  (duration: 46.605278ms)","trace[939483523] 'compare'  (duration: 124.988397ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-21T18:37:19.55118Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2024-06-21T18:37:19.562898Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":969,"took":"11.353931ms","hash":518064132,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2441216,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-06-21T18:37:19.562955Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":518064132,"revision":969,"compact-revision":-1}
	{"level":"info","ts":"2024-06-21T18:41:01.46327Z","caller":"traceutil/trace.go:171","msg":"trace[373022302] transaction","detail":"{read_only:false; response_revision:1916; number_of_response:1; }","duration":"202.232692ms","start":"2024-06-21T18:41:01.260997Z","end":"2024-06-21T18:41:01.46323Z","steps":["trace[373022302] 'process raft request'  (duration: 201.291371ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-21T18:41:01.463374Z","caller":"traceutil/trace.go:171","msg":"trace[1787973675] transaction","detail":"{read_only:false; response_revision:1917; number_of_response:1; }","duration":"177.381269ms","start":"2024-06-21T18:41:01.285981Z","end":"2024-06-21T18:41:01.463362Z","steps":["trace[1787973675] 'process raft request'  (duration: 177.120594ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:41:14 up 14 min,  0 users,  load average: 0.39, 0.24, 0.13
	Linux ha-406291 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d] <==
	I0621 18:39:29.510970       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:29.511181       1 main.go:227] handling current node
	I0621 18:39:39.514989       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:39.515025       1 main.go:227] handling current node
	I0621 18:39:49.520764       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:49.520908       1 main.go:227] handling current node
	I0621 18:39:59.524302       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:59.524430       1 main.go:227] handling current node
	I0621 18:40:09.536871       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:09.536951       1 main.go:227] handling current node
	I0621 18:40:19.546045       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:19.546228       1 main.go:227] handling current node
	I0621 18:40:29.557033       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:29.557254       1 main.go:227] handling current node
	I0621 18:40:39.561036       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:39.561193       1 main.go:227] handling current node
	I0621 18:40:49.569235       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:49.569361       1 main.go:227] handling current node
	I0621 18:40:59.579375       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:59.579516       1 main.go:227] handling current node
	I0621 18:41:09.583520       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:41:09.583631       1 main.go:227] handling current node
	I0621 18:41:09.583661       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:41:09.583679       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:41:09.583931       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.193 Flags: [] Table: 0} 
	
	
	==> kube-apiserver [2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3] <==
	I0621 18:27:21.231033       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0621 18:27:21.231057       1 policy_source.go:224] refreshing policies
	E0621 18:27:21.244004       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0621 18:27:21.291900       1 controller.go:615] quota admission added evaluator for: namespaces
	I0621 18:27:21.301249       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0621 18:27:22.093764       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0621 18:27:22.100226       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0621 18:27:22.100345       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0621 18:27:22.679124       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0621 18:27:22.717908       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0621 18:27:22.803597       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0621 18:27:22.812663       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.198]
	I0621 18:27:22.813674       1 controller.go:615] quota admission added evaluator for: endpoints
	I0621 18:27:22.817676       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0621 18:27:23.142771       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0621 18:27:24.323202       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0621 18:27:24.338622       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0621 18:27:24.532806       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0621 18:27:36.921775       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0621 18:27:37.247444       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0621 18:40:26.217258       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52318: use of closed network connection
	E0621 18:40:26.646809       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52394: use of closed network connection
	E0621 18:40:27.039177       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52460: use of closed network connection
	E0621 18:40:29.475531       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52582: use of closed network connection
	E0621 18:40:29.631306       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52614: use of closed network connection
	
	
	==> kube-controller-manager [3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31] <==
	I0621 18:27:37.660938       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="161.085µs"
	I0621 18:27:39.328050       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="55.475µs"
	I0621 18:27:39.330983       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.725µs"
	I0621 18:27:39.352409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.246µs"
	I0621 18:27:39.366116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.163µs"
	I0621 18:27:40.575618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="65.679µs"
	I0621 18:27:40.612176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.937752ms"
	I0621 18:27:40.612598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.232µs"
	I0621 18:27:40.634931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.444693ms"
	I0621 18:27:40.635035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.847µs"
	I0621 18:27:41.885215       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0621 18:28:57.137627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.563277ms"
	I0621 18:28:57.164070       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.375749ms"
	I0621 18:28:57.164194       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.743µs"
	I0621 18:29:00.876863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.452577ms"
	I0621 18:29:00.877083       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.932µs"
	I0621 18:41:01.468373       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-406291-m03\" does not exist"
	I0621 18:41:01.505245       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-406291-m03" podCIDRs=["10.244.1.0/24"]
	I0621 18:41:02.015312       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-406291-m03"
	I0621 18:41:10.879504       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-406291-m03"
	I0621 18:41:10.905675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="137.95µs"
	I0621 18:41:10.905996       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.91µs"
	I0621 18:41:10.921286       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.939µs"
	I0621 18:41:14.431187       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.902838ms"
	I0621 18:41:14.431268       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.911µs"
	
	
	==> kube-proxy [e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547] <==
	I0621 18:27:38.126736       1 server_linux.go:69] "Using iptables proxy"
	I0621 18:27:38.143236       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.198"]
	I0621 18:27:38.177576       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0621 18:27:38.177626       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0621 18:27:38.177644       1 server_linux.go:165] "Using iptables Proxier"
	I0621 18:27:38.180797       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0621 18:27:38.181002       1 server.go:872] "Version info" version="v1.30.2"
	I0621 18:27:38.181026       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 18:27:38.182882       1 config.go:192] "Starting service config controller"
	I0621 18:27:38.183195       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0621 18:27:38.183262       1 config.go:101] "Starting endpoint slice config controller"
	I0621 18:27:38.183278       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0621 18:27:38.184787       1 config.go:319] "Starting node config controller"
	I0621 18:27:38.184819       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0621 18:27:38.283818       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0621 18:27:38.283839       1 shared_informer.go:320] Caches are synced for service config
	I0621 18:27:38.285303       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d] <==
	W0621 18:27:21.175406       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0621 18:27:21.176948       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0621 18:27:21.176960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0621 18:27:21.176992       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:21.177025       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177056       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0621 18:27:21.177088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0621 18:27:21.177120       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0621 18:27:21.177197       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:21.177204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0621 18:27:22.041765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.041824       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.144830       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:22.144881       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0621 18:27:22.217224       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.217266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.256407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:22.256450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0621 18:27:22.361486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0621 18:27:22.361536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0621 18:27:22.366073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0621 18:27:22.366190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0621 18:27:25.267361       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 21 18:36:24 ha-406291 kubelet[1367]: E0621 18:36:24.482853    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:36:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:36:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:36:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:36:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:37:24 ha-406291 kubelet[1367]: E0621 18:37:24.483671    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:37:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:37:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:37:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:37:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:38:24 ha-406291 kubelet[1367]: E0621 18:38:24.483473    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:38:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:38:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:38:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:38:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:39:24 ha-406291 kubelet[1367]: E0621 18:39:24.484210    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:39:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:39:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:39:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:39:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:40:24 ha-406291 kubelet[1367]: E0621 18:40:24.483552    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:40:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:40:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:40:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:40:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-406291 -n ha-406291
helpers_test.go:261: (dbg) Run:  kubectl --context ha-406291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-p2c87
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-406291 describe pod busybox-fc5497c4f-p2c87
helpers_test.go:282: (dbg) kubectl --context ha-406291 describe pod busybox-fc5497c4f-p2c87:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-p2c87
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q8tzk (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-q8tzk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  111s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  5s                  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (43.85s)
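The FailedScheduling events above explain the Pending busybox replica: the busybox pods carry pod anti-affinity rules, so a replica apparently needs a node that is not already running one, and until the new worker joined there were not enough schedulable nodes. A minimal way to reproduce that check by hand (a sketch only; it assumes kubectl is on PATH and reuses the ha-406291 context and pod name shown in the logs above):

    # Inspect node readiness, the pod's anti-affinity spec, and its scheduling events (names taken from the log).
    kubectl --context ha-406291 get nodes -o wide
    kubectl --context ha-406291 get pod busybox-fc5497c4f-p2c87 -o jsonpath='{.spec.affinity.podAntiAffinity}'
    kubectl --context ha-406291 get events --field-selector involvedObject.name=busybox-fc5497c4f-p2c87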

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (2.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:304: expected profile "ha-406291" in json of 'profile list' to include 4 nodes but have 3 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-406291\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-406291\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPor
t\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-406291\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.198\",\"Port\":8443,\"KubernetesVersion\":
\"v1.30.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.89\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.193\",\"Port\":0,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false
,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMet
rics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:307: expected profile "ha-406291" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-406291\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-406291\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\
"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-406291\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.198\",\"Port\":8443,\"Kuberne
tesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.89\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.193\",\"Port\":0,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,
\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false
,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-406291 -n ha-406291
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-406291 logs -n 25: (1.098235103s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterClusterStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |           |         |         |                     |                     |
	| node    | add -p ha-406291 -v=7                | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:41 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/21 18:26:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0621 18:26:42.447747   30068 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:26:42.447858   30068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:26:42.447867   30068 out.go:304] Setting ErrFile to fd 2...
	I0621 18:26:42.447871   30068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:26:42.448064   30068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:26:42.448611   30068 out.go:298] Setting JSON to false
	I0621 18:26:42.449397   30068 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4100,"bootTime":1718990302,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 18:26:42.449454   30068 start.go:139] virtualization: kvm guest
	I0621 18:26:42.451750   30068 out.go:177] * [ha-406291] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 18:26:42.453097   30068 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 18:26:42.453116   30068 notify.go:220] Checking for updates...
	I0621 18:26:42.456195   30068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 18:26:42.457398   30068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:26:42.458579   30068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.459798   30068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 18:26:42.461088   30068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 18:26:42.462525   30068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 18:26:42.497263   30068 out.go:177] * Using the kvm2 driver based on user configuration
	I0621 18:26:42.498734   30068 start.go:297] selected driver: kvm2
	I0621 18:26:42.498753   30068 start.go:901] validating driver "kvm2" against <nil>
	I0621 18:26:42.498763   30068 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 18:26:42.499421   30068 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:26:42.499483   30068 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 18:26:42.513772   30068 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 18:26:42.513840   30068 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0621 18:26:42.514036   30068 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0621 18:26:42.514063   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:26:42.514070   30068 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0621 18:26:42.514080   30068 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0621 18:26:42.514119   30068 start.go:340] cluster config:
	{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:26:42.514203   30068 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:26:42.515839   30068 out.go:177] * Starting "ha-406291" primary control-plane node in "ha-406291" cluster
	I0621 18:26:42.516925   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:26:42.516952   30068 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 18:26:42.516960   30068 cache.go:56] Caching tarball of preloaded images
	I0621 18:26:42.517025   30068 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:26:42.517035   30068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:26:42.517302   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:26:42.517325   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json: {Name:mkd43eceea282503c79b6e4b90bbf7258fcf8b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:26:42.517445   30068 start.go:360] acquireMachinesLock for ha-406291: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:26:42.517470   30068 start.go:364] duration metric: took 13.314µs to acquireMachinesLock for "ha-406291"
	I0621 18:26:42.517485   30068 start.go:93] Provisioning new machine with config: &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:26:42.517531   30068 start.go:125] createHost starting for "" (driver="kvm2")
	I0621 18:26:42.518937   30068 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0621 18:26:42.519071   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:26:42.519109   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:26:42.533235   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0621 18:26:42.533669   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:26:42.534312   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:26:42.534360   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:26:42.534665   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:26:42.534880   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:26:42.535018   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:26:42.535180   30068 start.go:159] libmachine.API.Create for "ha-406291" (driver="kvm2")
	I0621 18:26:42.535209   30068 client.go:168] LocalClient.Create starting
	I0621 18:26:42.535233   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem
	I0621 18:26:42.535267   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:26:42.535282   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:26:42.535339   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem
	I0621 18:26:42.535357   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:26:42.535367   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:26:42.535383   30068 main.go:141] libmachine: Running pre-create checks...
	I0621 18:26:42.535396   30068 main.go:141] libmachine: (ha-406291) Calling .PreCreateCheck
	I0621 18:26:42.535734   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:26:42.536101   30068 main.go:141] libmachine: Creating machine...
	I0621 18:26:42.536113   30068 main.go:141] libmachine: (ha-406291) Calling .Create
	I0621 18:26:42.536232   30068 main.go:141] libmachine: (ha-406291) Creating KVM machine...
	I0621 18:26:42.537484   30068 main.go:141] libmachine: (ha-406291) DBG | found existing default KVM network
	I0621 18:26:42.538310   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.538153   30091 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0621 18:26:42.538339   30068 main.go:141] libmachine: (ha-406291) DBG | created network xml: 
	I0621 18:26:42.538346   30068 main.go:141] libmachine: (ha-406291) DBG | <network>
	I0621 18:26:42.538355   30068 main.go:141] libmachine: (ha-406291) DBG |   <name>mk-ha-406291</name>
	I0621 18:26:42.538371   30068 main.go:141] libmachine: (ha-406291) DBG |   <dns enable='no'/>
	I0621 18:26:42.538385   30068 main.go:141] libmachine: (ha-406291) DBG |   
	I0621 18:26:42.538392   30068 main.go:141] libmachine: (ha-406291) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0621 18:26:42.538400   30068 main.go:141] libmachine: (ha-406291) DBG |     <dhcp>
	I0621 18:26:42.538412   30068 main.go:141] libmachine: (ha-406291) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0621 18:26:42.538421   30068 main.go:141] libmachine: (ha-406291) DBG |     </dhcp>
	I0621 18:26:42.538439   30068 main.go:141] libmachine: (ha-406291) DBG |   </ip>
	I0621 18:26:42.538451   30068 main.go:141] libmachine: (ha-406291) DBG |   
	I0621 18:26:42.538458   30068 main.go:141] libmachine: (ha-406291) DBG | </network>
	I0621 18:26:42.538470   30068 main.go:141] libmachine: (ha-406291) DBG | 
	I0621 18:26:42.543401   30068 main.go:141] libmachine: (ha-406291) DBG | trying to create private KVM network mk-ha-406291 192.168.39.0/24...
	I0621 18:26:42.606041   30068 main.go:141] libmachine: (ha-406291) DBG | private KVM network mk-ha-406291 192.168.39.0/24 created
	I0621 18:26:42.606072   30068 main.go:141] libmachine: (ha-406291) Setting up store path in /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 ...
	I0621 18:26:42.606091   30068 main.go:141] libmachine: (ha-406291) Building disk image from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 18:26:42.606165   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.606075   30091 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.606280   30068 main.go:141] libmachine: (ha-406291) Downloading /home/jenkins/minikube-integration/19112-8111/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0621 18:26:42.829374   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.829262   30091 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa...
	I0621 18:26:42.941790   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.941666   30091 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/ha-406291.rawdisk...
	I0621 18:26:42.941834   30068 main.go:141] libmachine: (ha-406291) DBG | Writing magic tar header
	I0621 18:26:42.941844   30068 main.go:141] libmachine: (ha-406291) DBG | Writing SSH key tar header
	I0621 18:26:42.941852   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.941778   30091 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 ...
	I0621 18:26:42.941909   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291
	I0621 18:26:42.941989   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 (perms=drwx------)
	I0621 18:26:42.942007   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines
	I0621 18:26:42.942019   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines (perms=drwxr-xr-x)
	I0621 18:26:42.942033   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube (perms=drwxr-xr-x)
	I0621 18:26:42.942053   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111 (perms=drwxrwxr-x)
	I0621 18:26:42.942060   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.942069   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111
	I0621 18:26:42.942075   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0621 18:26:42.942080   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0621 18:26:42.942088   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins
	I0621 18:26:42.942104   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0621 18:26:42.942117   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home
	I0621 18:26:42.942128   30068 main.go:141] libmachine: (ha-406291) DBG | Skipping /home - not owner
	I0621 18:26:42.942142   30068 main.go:141] libmachine: (ha-406291) Creating domain...
	I0621 18:26:42.943154   30068 main.go:141] libmachine: (ha-406291) define libvirt domain using xml: 
	I0621 18:26:42.943176   30068 main.go:141] libmachine: (ha-406291) <domain type='kvm'>
	I0621 18:26:42.943183   30068 main.go:141] libmachine: (ha-406291)   <name>ha-406291</name>
	I0621 18:26:42.943188   30068 main.go:141] libmachine: (ha-406291)   <memory unit='MiB'>2200</memory>
	I0621 18:26:42.943199   30068 main.go:141] libmachine: (ha-406291)   <vcpu>2</vcpu>
	I0621 18:26:42.943203   30068 main.go:141] libmachine: (ha-406291)   <features>
	I0621 18:26:42.943208   30068 main.go:141] libmachine: (ha-406291)     <acpi/>
	I0621 18:26:42.943212   30068 main.go:141] libmachine: (ha-406291)     <apic/>
	I0621 18:26:42.943217   30068 main.go:141] libmachine: (ha-406291)     <pae/>
	I0621 18:26:42.943223   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943229   30068 main.go:141] libmachine: (ha-406291)   </features>
	I0621 18:26:42.943234   30068 main.go:141] libmachine: (ha-406291)   <cpu mode='host-passthrough'>
	I0621 18:26:42.943255   30068 main.go:141] libmachine: (ha-406291)   
	I0621 18:26:42.943266   30068 main.go:141] libmachine: (ha-406291)   </cpu>
	I0621 18:26:42.943284   30068 main.go:141] libmachine: (ha-406291)   <os>
	I0621 18:26:42.943318   30068 main.go:141] libmachine: (ha-406291)     <type>hvm</type>
	I0621 18:26:42.943328   30068 main.go:141] libmachine: (ha-406291)     <boot dev='cdrom'/>
	I0621 18:26:42.943333   30068 main.go:141] libmachine: (ha-406291)     <boot dev='hd'/>
	I0621 18:26:42.943341   30068 main.go:141] libmachine: (ha-406291)     <bootmenu enable='no'/>
	I0621 18:26:42.943345   30068 main.go:141] libmachine: (ha-406291)   </os>
	I0621 18:26:42.943355   30068 main.go:141] libmachine: (ha-406291)   <devices>
	I0621 18:26:42.943360   30068 main.go:141] libmachine: (ha-406291)     <disk type='file' device='cdrom'>
	I0621 18:26:42.943371   30068 main.go:141] libmachine: (ha-406291)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/boot2docker.iso'/>
	I0621 18:26:42.943384   30068 main.go:141] libmachine: (ha-406291)       <target dev='hdc' bus='scsi'/>
	I0621 18:26:42.943397   30068 main.go:141] libmachine: (ha-406291)       <readonly/>
	I0621 18:26:42.943404   30068 main.go:141] libmachine: (ha-406291)     </disk>
	I0621 18:26:42.943417   30068 main.go:141] libmachine: (ha-406291)     <disk type='file' device='disk'>
	I0621 18:26:42.943429   30068 main.go:141] libmachine: (ha-406291)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0621 18:26:42.943445   30068 main.go:141] libmachine: (ha-406291)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/ha-406291.rawdisk'/>
	I0621 18:26:42.943456   30068 main.go:141] libmachine: (ha-406291)       <target dev='hda' bus='virtio'/>
	I0621 18:26:42.943478   30068 main.go:141] libmachine: (ha-406291)     </disk>
	I0621 18:26:42.943499   30068 main.go:141] libmachine: (ha-406291)     <interface type='network'>
	I0621 18:26:42.943509   30068 main.go:141] libmachine: (ha-406291)       <source network='mk-ha-406291'/>
	I0621 18:26:42.943513   30068 main.go:141] libmachine: (ha-406291)       <model type='virtio'/>
	I0621 18:26:42.943519   30068 main.go:141] libmachine: (ha-406291)     </interface>
	I0621 18:26:42.943526   30068 main.go:141] libmachine: (ha-406291)     <interface type='network'>
	I0621 18:26:42.943532   30068 main.go:141] libmachine: (ha-406291)       <source network='default'/>
	I0621 18:26:42.943539   30068 main.go:141] libmachine: (ha-406291)       <model type='virtio'/>
	I0621 18:26:42.943544   30068 main.go:141] libmachine: (ha-406291)     </interface>
	I0621 18:26:42.943549   30068 main.go:141] libmachine: (ha-406291)     <serial type='pty'>
	I0621 18:26:42.943554   30068 main.go:141] libmachine: (ha-406291)       <target port='0'/>
	I0621 18:26:42.943560   30068 main.go:141] libmachine: (ha-406291)     </serial>
	I0621 18:26:42.943565   30068 main.go:141] libmachine: (ha-406291)     <console type='pty'>
	I0621 18:26:42.943571   30068 main.go:141] libmachine: (ha-406291)       <target type='serial' port='0'/>
	I0621 18:26:42.943583   30068 main.go:141] libmachine: (ha-406291)     </console>
	I0621 18:26:42.943593   30068 main.go:141] libmachine: (ha-406291)     <rng model='virtio'>
	I0621 18:26:42.943602   30068 main.go:141] libmachine: (ha-406291)       <backend model='random'>/dev/random</backend>
	I0621 18:26:42.943609   30068 main.go:141] libmachine: (ha-406291)     </rng>
	I0621 18:26:42.943617   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943621   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943627   30068 main.go:141] libmachine: (ha-406291)   </devices>
	I0621 18:26:42.943631   30068 main.go:141] libmachine: (ha-406291) </domain>
	I0621 18:26:42.943638   30068 main.go:141] libmachine: (ha-406291) 
	I0621 18:26:42.948298   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:44:10:c4 in network default
	I0621 18:26:42.948968   30068 main.go:141] libmachine: (ha-406291) Ensuring networks are active...
	I0621 18:26:42.948988   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:42.949710   30068 main.go:141] libmachine: (ha-406291) Ensuring network default is active
	I0621 18:26:42.950033   30068 main.go:141] libmachine: (ha-406291) Ensuring network mk-ha-406291 is active
	I0621 18:26:42.950493   30068 main.go:141] libmachine: (ha-406291) Getting domain xml...
	I0621 18:26:42.951151   30068 main.go:141] libmachine: (ha-406291) Creating domain...
	I0621 18:26:44.128421   30068 main.go:141] libmachine: (ha-406291) Waiting to get IP...
	I0621 18:26:44.129183   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.129530   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.129550   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.129513   30091 retry.go:31] will retry after 273.280189ms: waiting for machine to come up
	I0621 18:26:44.404590   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.405440   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.405467   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.405386   30091 retry.go:31] will retry after 363.287979ms: waiting for machine to come up
	I0621 18:26:44.769749   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.770188   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.770217   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.770146   30091 retry.go:31] will retry after 445.9009ms: waiting for machine to come up
	I0621 18:26:45.217708   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:45.218113   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:45.218132   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:45.218075   30091 retry.go:31] will retry after 497.769852ms: waiting for machine to come up
	I0621 18:26:45.717913   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:45.718380   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:45.718402   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:45.718333   30091 retry.go:31] will retry after 609.412902ms: waiting for machine to come up
	I0621 18:26:46.329589   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:46.330043   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:46.330077   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:46.330033   30091 retry.go:31] will retry after 668.226784ms: waiting for machine to come up
	I0621 18:26:46.999851   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:47.000352   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:47.000399   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:47.000310   30091 retry.go:31] will retry after 928.90777ms: waiting for machine to come up
	I0621 18:26:47.931043   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:47.931568   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:47.931598   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:47.931527   30091 retry.go:31] will retry after 1.407643188s: waiting for machine to come up
	I0621 18:26:49.341126   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:49.341529   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:49.341557   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:49.341489   30091 retry.go:31] will retry after 1.657120945s: waiting for machine to come up
	I0621 18:26:51.001518   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:51.001999   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:51.002022   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:51.001955   30091 retry.go:31] will retry after 1.506025988s: waiting for machine to come up
	I0621 18:26:52.509823   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:52.510314   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:52.510342   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:52.510269   30091 retry.go:31] will retry after 2.859818514s: waiting for machine to come up
	I0621 18:26:55.371181   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:55.371726   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:55.371755   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:55.371678   30091 retry.go:31] will retry after 3.374080501s: waiting for machine to come up
	I0621 18:26:58.747494   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:58.748019   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:58.748039   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:58.747991   30091 retry.go:31] will retry after 4.386740875s: waiting for machine to come up
	I0621 18:27:03.136546   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.137046   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has current primary IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.137063   30068 main.go:141] libmachine: (ha-406291) Found IP for machine: 192.168.39.198
	I0621 18:27:03.137079   30068 main.go:141] libmachine: (ha-406291) Reserving static IP address...
	I0621 18:27:03.137427   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find host DHCP lease matching {name: "ha-406291", mac: "52:54:00:38:dc:46", ip: "192.168.39.198"} in network mk-ha-406291
	I0621 18:27:03.211473   30068 main.go:141] libmachine: (ha-406291) DBG | Getting to WaitForSSH function...
	I0621 18:27:03.211506   30068 main.go:141] libmachine: (ha-406291) Reserved static IP address: 192.168.39.198
	I0621 18:27:03.211519   30068 main.go:141] libmachine: (ha-406291) Waiting for SSH to be available...
	I0621 18:27:03.214029   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.214477   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291
	I0621 18:27:03.214509   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find defined IP address of network mk-ha-406291 interface with MAC address 52:54:00:38:dc:46
	I0621 18:27:03.214661   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH client type: external
	I0621 18:27:03.214702   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa (-rw-------)
	I0621 18:27:03.214745   30068 main.go:141] libmachine: (ha-406291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:03.214771   30068 main.go:141] libmachine: (ha-406291) DBG | About to run SSH command:
	I0621 18:27:03.214784   30068 main.go:141] libmachine: (ha-406291) DBG | exit 0
	I0621 18:27:03.218578   30068 main.go:141] libmachine: (ha-406291) DBG | SSH cmd err, output: exit status 255: 
	I0621 18:27:03.218603   30068 main.go:141] libmachine: (ha-406291) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0621 18:27:03.218614   30068 main.go:141] libmachine: (ha-406291) DBG | command : exit 0
	I0621 18:27:03.218630   30068 main.go:141] libmachine: (ha-406291) DBG | err     : exit status 255
	I0621 18:27:03.218643   30068 main.go:141] libmachine: (ha-406291) DBG | output  : 
	I0621 18:27:06.220803   30068 main.go:141] libmachine: (ha-406291) DBG | Getting to WaitForSSH function...
	I0621 18:27:06.223287   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.223552   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.223591   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.223725   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH client type: external
	I0621 18:27:06.223751   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa (-rw-------)
	I0621 18:27:06.223775   30068 main.go:141] libmachine: (ha-406291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:06.223788   30068 main.go:141] libmachine: (ha-406291) DBG | About to run SSH command:
	I0621 18:27:06.223797   30068 main.go:141] libmachine: (ha-406291) DBG | exit 0
	I0621 18:27:06.345962   30068 main.go:141] libmachine: (ha-406291) DBG | SSH cmd err, output: <nil>: 
	I0621 18:27:06.346198   30068 main.go:141] libmachine: (ha-406291) KVM machine creation complete!
	I0621 18:27:06.346530   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:27:06.347151   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:06.347376   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:06.347539   30068 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0621 18:27:06.347553   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:06.349257   30068 main.go:141] libmachine: Detecting operating system of created instance...
	I0621 18:27:06.349272   30068 main.go:141] libmachine: Waiting for SSH to be available...
	I0621 18:27:06.349278   30068 main.go:141] libmachine: Getting to WaitForSSH function...
	I0621 18:27:06.349284   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.351365   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.351709   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.351738   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.351848   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.352053   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.352215   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.352441   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.352676   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.352926   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.352939   30068 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0621 18:27:06.449038   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:06.449066   30068 main.go:141] libmachine: Detecting the provisioner...
	I0621 18:27:06.449077   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.451811   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.452202   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.452223   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.452405   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.452602   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.452762   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.452898   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.453074   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.453321   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.453334   30068 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0621 18:27:06.550539   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0621 18:27:06.550611   30068 main.go:141] libmachine: found compatible host: buildroot
	I0621 18:27:06.550618   30068 main.go:141] libmachine: Provisioning with buildroot...
	I0621 18:27:06.550625   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.550871   30068 buildroot.go:166] provisioning hostname "ha-406291"
	I0621 18:27:06.550891   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.551068   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.553701   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.554112   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.554138   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.554279   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.554452   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.554601   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.554725   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.554869   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.555029   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.555040   30068 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291 && echo "ha-406291" | sudo tee /etc/hostname
	I0621 18:27:06.664012   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291
	
	I0621 18:27:06.664038   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.666600   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.666923   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.666952   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.667091   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.667277   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.667431   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.667559   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.667745   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.667932   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.667949   30068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:27:06.778156   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:06.778199   30068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:27:06.778224   30068 buildroot.go:174] setting up certificates
	I0621 18:27:06.778237   30068 provision.go:84] configureAuth start
	I0621 18:27:06.778250   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.778526   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:06.781267   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.781583   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.781610   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.781773   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.784225   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.784546   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.784564   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.784717   30068 provision.go:143] copyHostCerts
	I0621 18:27:06.784747   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:06.784796   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:27:06.784813   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:06.784893   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:27:06.784992   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:06.785017   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:27:06.785023   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:06.785064   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:27:06.785126   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:06.785153   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:27:06.785162   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:06.785194   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:27:06.785257   30068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291 san=[127.0.0.1 192.168.39.198 ha-406291 localhost minikube]
	I0621 18:27:06.904910   30068 provision.go:177] copyRemoteCerts
	I0621 18:27:06.904976   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:27:06.905004   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.907600   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.907883   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.907916   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.908115   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.908308   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.908462   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.908599   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:06.987463   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:27:06.987540   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:27:07.009572   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:27:07.009661   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0621 18:27:07.031219   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:27:07.031333   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 18:27:07.052682   30068 provision.go:87] duration metric: took 274.433059ms to configureAuth
	I0621 18:27:07.052709   30068 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:27:07.052895   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:07.052984   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.055368   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.055720   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.055742   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.055971   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.056161   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.056324   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.056453   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.056615   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:07.056785   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:07.056814   30068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:27:07.307055   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 18:27:07.307083   30068 main.go:141] libmachine: Checking connection to Docker...
	I0621 18:27:07.307105   30068 main.go:141] libmachine: (ha-406291) Calling .GetURL
	I0621 18:27:07.308373   30068 main.go:141] libmachine: (ha-406291) DBG | Using libvirt version 6000000
	I0621 18:27:07.310322   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.310631   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.310658   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.310756   30068 main.go:141] libmachine: Docker is up and running!
	I0621 18:27:07.310768   30068 main.go:141] libmachine: Reticulating splines...
	I0621 18:27:07.310774   30068 client.go:171] duration metric: took 24.775558818s to LocalClient.Create
	I0621 18:27:07.310795   30068 start.go:167] duration metric: took 24.775614868s to libmachine.API.Create "ha-406291"
	I0621 18:27:07.310807   30068 start.go:293] postStartSetup for "ha-406291" (driver="kvm2")
	I0621 18:27:07.310818   30068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:27:07.310835   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.311186   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:27:07.311208   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.313308   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.313543   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.313581   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.313682   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.313855   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.314042   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.314209   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.391859   30068 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:27:07.396062   30068 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:27:07.396083   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:27:07.396132   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:27:07.396193   30068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:27:07.396202   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:27:07.396289   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:27:07.405435   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:07.427927   30068 start.go:296] duration metric: took 117.075834ms for postStartSetup
	I0621 18:27:07.427984   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:27:07.428562   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:07.431157   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.431479   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.431523   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.431791   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:07.431969   30068 start.go:128] duration metric: took 24.914429669s to createHost
	I0621 18:27:07.431990   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.434121   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.434421   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.434445   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.434510   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.434692   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.434865   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.435009   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.435168   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:07.435372   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:07.435384   30068 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0621 18:27:07.530141   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718994427.508226463
	
	I0621 18:27:07.530165   30068 fix.go:216] guest clock: 1718994427.508226463
	I0621 18:27:07.530173   30068 fix.go:229] Guest: 2024-06-21 18:27:07.508226463 +0000 UTC Remote: 2024-06-21 18:27:07.431981059 +0000 UTC m=+25.016949864 (delta=76.245404ms)
	I0621 18:27:07.530199   30068 fix.go:200] guest clock delta is within tolerance: 76.245404ms
	I0621 18:27:07.530204   30068 start.go:83] releasing machines lock for "ha-406291", held for 25.012726918s
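The fix.go lines above compare the guest's "date +%s.%N" reading against the host clock and accept the 76ms skew. A small sketch of that check, with an assumed 1s tolerance (the log does not show minikube's actual threshold):

	package main

	import (
		"fmt"
		"math"
		"time"
	)

	// withinTolerance reports the guest-vs-host clock delta and whether it is
	// small enough to skip resynchronizing the guest clock.
	func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		return delta, math.Abs(float64(delta)) <= float64(tolerance)
	}

	func main() {
		guest := time.Unix(1718994427, 508226463) // parsed from the guest's "date +%s.%N" output
		host := time.Now()
		delta, ok := withinTolerance(guest, host, time.Second)
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, ok)
	}
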
	I0621 18:27:07.530222   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.530466   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:07.532753   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.533110   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.533151   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.533275   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533702   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533877   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533978   30068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:27:07.534028   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.534087   30068 ssh_runner.go:195] Run: cat /version.json
	I0621 18:27:07.534115   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.536489   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536798   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.536828   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536845   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536983   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.537154   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.537312   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.537330   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.537337   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.537509   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.537507   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.537675   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.537830   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.537968   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.610886   30068 ssh_runner.go:195] Run: systemctl --version
	I0621 18:27:07.648150   30068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:27:07.798080   30068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:27:07.803683   30068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:27:07.803731   30068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:27:07.820345   30068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
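A sketch of the CNI cleanup step above, renaming bridge/podman configs so the runtime ignores them (a simplified stand-in for the find/mv pipeline run over SSH):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableConflictingCNIConfigs appends .mk_disabled to bridge/podman CNI
	// config files in dir, mirroring the rename done in the log.
	func disableConflictingCNIConfigs(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}

	func main() {
		disabled, err := disableConflictingCNIConfigs("/etc/cni/net.d")
		fmt.Println(disabled, err)
	}
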
	I0621 18:27:07.820363   30068 start.go:494] detecting cgroup driver to use...
	I0621 18:27:07.820412   30068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:27:07.835960   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:27:07.849269   30068 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:27:07.849324   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:27:07.861858   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:27:07.874371   30068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:27:07.984965   30068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:27:08.126897   30068 docker.go:233] disabling docker service ...
	I0621 18:27:08.126973   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:27:08.140294   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:27:08.152460   30068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:27:08.289101   30068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:27:08.414578   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 18:27:08.428193   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:27:08.445335   30068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:27:08.445406   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.454715   30068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:27:08.454780   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.464286   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.473688   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.483215   30068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:27:08.492907   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.502386   30068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.518138   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.527822   30068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:27:08.536491   30068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 18:27:08.536537   30068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 18:27:08.548343   30068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 18:27:08.557395   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:27:08.668782   30068 ssh_runner.go:195] Run: sudo systemctl restart crio
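A sketch of the CRI-O configuration edits above: the crictl endpoint file plus regexp-based rewrites of pause_image and cgroup_manager in 02-crio.conf (the in-memory patching here stands in for the sed calls run over SSH):

	package main

	import (
		"fmt"
		"regexp"
	)

	// crictlYAML points crictl at the CRI-O socket, as written to /etc/crictl.yaml.
	const crictlYAML = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"

	// patchCrioConf rewrites the pause image and cgroup driver lines, the same
	// two substitutions the log applies to /etc/crio/crio.conf.d/02-crio.conf.
	func patchCrioConf(conf, pauseImage, cgroupManager string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
		return conf
	}

	func main() {
		in := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
		fmt.Print(crictlYAML)
		fmt.Print(patchCrioConf(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
	}
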
	I0621 18:27:08.793146   30068 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:27:08.793228   30068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:27:08.797886   30068 start.go:562] Will wait 60s for crictl version
	I0621 18:27:08.797933   30068 ssh_runner.go:195] Run: which crictl
	I0621 18:27:08.801183   30068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:27:08.838953   30068 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:27:08.839028   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:27:08.865047   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:27:08.892059   30068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:27:08.893365   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:08.895801   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:08.896174   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:08.896198   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:08.896377   30068 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:27:08.900124   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:27:08.912152   30068 kubeadm.go:877] updating cluster {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0621 18:27:08.912252   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:27:08.912299   30068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:27:08.941267   30068 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0621 18:27:08.941328   30068 ssh_runner.go:195] Run: which lz4
	I0621 18:27:08.944757   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0621 18:27:08.944843   30068 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0621 18:27:08.948482   30068 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0621 18:27:08.948507   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0621 18:27:10.186487   30068 crio.go:462] duration metric: took 1.241671996s to copy over tarball
	I0621 18:27:10.186568   30068 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0621 18:27:12.219224   30068 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.032622286s)
	I0621 18:27:12.219256   30068 crio.go:469] duration metric: took 2.032747658s to extract the tarball
	I0621 18:27:12.219265   30068 ssh_runner.go:146] rm: /preloaded.tar.lz4
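A sketch of the preload extraction step, mirroring the tar invocation in the log (the scp transfer is elided, and removing the tarball may require elevated privileges on a real guest):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// extractPreload unpacks the lz4-compressed image preload under /var and
	// removes the tarball afterwards, matching the commands in the log.
	func extractPreload(tarball string) error {
		// --xattrs/--xattrs-include preserve file capabilities; -I lz4 decompresses on the fly.
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("extracting %s: %w", tarball, err)
		}
		return os.Remove(tarball)
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
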
	I0621 18:27:12.255526   30068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:27:12.297692   30068 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 18:27:12.297715   30068 cache_images.go:84] Images are preloaded, skipping loading
	I0621 18:27:12.297725   30068 kubeadm.go:928] updating node { 192.168.39.198 8443 v1.30.2 crio true true} ...
	I0621 18:27:12.297863   30068 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
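The kubelet drop-in above is rendered from a handful of per-node values. A sketch using text/template with the same ExecStart line (the struct field names are assumptions, not minikube's types):

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletUnit is the systemd drop-in body shown in the log, parameterized
	// on the Kubernetes version, node name, and node IP.
	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(kubeletUnit))
		_ = t.Execute(os.Stdout, struct {
			KubernetesVersion, NodeName, NodeIP string
		}{"v1.30.2", "ha-406291", "192.168.39.198"})
	}
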
	I0621 18:27:12.297956   30068 ssh_runner.go:195] Run: crio config
	I0621 18:27:12.347243   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:27:12.347276   30068 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0621 18:27:12.347288   30068 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0621 18:27:12.347314   30068 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.198 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-406291 NodeName:ha-406291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0621 18:27:12.347487   30068 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-406291"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
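A sketch of how the kubelet-specific overrides in that kubeadm config could be produced with gopkg.in/yaml.v3; the field set is trimmed to the disk-eviction and runtime settings shown above and is not minikube's actual generator:

	package main

	import (
		"fmt"

		"gopkg.in/yaml.v3"
	)

	// kubeletConfig holds just the KubeletConfiguration fields visible in the log.
	type kubeletConfig struct {
		APIVersion                  string            `yaml:"apiVersion"`
		Kind                        string            `yaml:"kind"`
		CgroupDriver                string            `yaml:"cgroupDriver"`
		ContainerRuntimeEndpoint    string            `yaml:"containerRuntimeEndpoint"`
		ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
		EvictionHard                map[string]string `yaml:"evictionHard"`
		FailSwapOn                  bool              `yaml:"failSwapOn"`
		StaticPodPath               string            `yaml:"staticPodPath"`
	}

	func main() {
		cfg := kubeletConfig{
			APIVersion:                  "kubelet.config.k8s.io/v1beta1",
			Kind:                        "KubeletConfiguration",
			CgroupDriver:                "cgroupfs",
			ContainerRuntimeEndpoint:    "unix:///var/run/crio/crio.sock",
			ImageGCHighThresholdPercent: 100, // effectively disables threshold-based image GC
			EvictionHard: map[string]string{
				"nodefs.available":  "0%",
				"nodefs.inodesFree": "0%",
				"imagefs.available": "0%",
			},
			FailSwapOn:    false,
			StaticPodPath: "/etc/kubernetes/manifests",
		}
		out, err := yaml.Marshal(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}
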
	I0621 18:27:12.347514   30068 kube-vip.go:115] generating kube-vip config ...
	I0621 18:27:12.347563   30068 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:27:12.362180   30068 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:27:12.362273   30068 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
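Per the modprobe check above (kube-vip.go:167), control-plane load balancing is enabled only once the IPVS modules load. A sketch of that decision, reduced to an env map rather than the full pod spec shown in the manifest:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubeVIPEnv returns a minimal kube-vip environment; lb_enable/lb_port are
	// added only if the IPVS modules can be loaded, like the probe in the log.
	func kubeVIPEnv(vip, port string) map[string]string {
		env := map[string]string{
			"vip_arp":   "true",
			"port":      port,
			"cp_enable": "true",
			"address":   vip,
		}
		if err := exec.Command("sudo", "modprobe", "--all",
			"ip_vs", "ip_vs_rr", "ip_vs_wrr", "ip_vs_sh", "nf_conntrack").Run(); err == nil {
			env["lb_enable"] = "true"
			env["lb_port"] = port
		}
		return env
	}

	func main() {
		fmt.Println(kubeVIPEnv("192.168.39.254", "8443"))
	}
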
	I0621 18:27:12.362316   30068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:27:12.371448   30068 binaries.go:44] Found k8s binaries, skipping transfer
	I0621 18:27:12.371499   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0621 18:27:12.380031   30068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0621 18:27:12.395354   30068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0621 18:27:12.410533   30068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0621 18:27:12.425474   30068 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0621 18:27:12.440059   30068 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0621 18:27:12.443523   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:27:12.454828   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:27:12.572486   30068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0621 18:27:12.589057   30068 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.198
	I0621 18:27:12.589078   30068 certs.go:194] generating shared ca certs ...
	I0621 18:27:12.589095   30068 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.589221   30068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:27:12.589272   30068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:27:12.589282   30068 certs.go:256] generating profile certs ...
	I0621 18:27:12.589333   30068 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:27:12.589346   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt with IP's: []
	I0621 18:27:12.759863   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt ...
	I0621 18:27:12.759890   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt: {Name:mk1350197087e6f37ca28e80a43c199beace4f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.760090   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key ...
	I0621 18:27:12.760104   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key: {Name:mk90994b992a268304b337419707e3332d3f039a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.760206   30068 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92
	I0621 18:27:12.760222   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.254]
	I0621 18:27:13.132336   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 ...
	I0621 18:27:13.132362   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92: {Name:mke7daa70ff2d7bf8fa87eea51b1ed6731c0dd6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.132530   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92 ...
	I0621 18:27:13.132546   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92: {Name:mk310235904dba1c4db66ef73b8dcc06ff030051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.132647   30068 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:27:13.132737   30068 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
	I0621 18:27:13.132790   30068 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:27:13.132806   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt with IP's: []
	I0621 18:27:13.317891   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt ...
	I0621 18:27:13.317927   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt: {Name:mk5e450ef3633fa54e81eaeb94f9408c94729912 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.318119   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key ...
	I0621 18:27:13.318132   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key: {Name:mk3a1443924b05c36251566d5313d0eeb467e0fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
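A sketch of the certificate-generation step above: create a key pair and a certificate carrying the IP SANs listed for the apiserver cert. minikube signs these with its own CA; for brevity this sketch self-signs, and the subject and validity period are assumptions:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// IP SANs copied from the log line for apiserver.crt.54585d92.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.198"),
				net.ParseIP("192.168.39.254"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
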
	I0621 18:27:13.318220   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:27:13.318241   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:27:13.318251   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:27:13.318264   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:27:13.318274   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:27:13.318290   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:27:13.318302   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:27:13.318314   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:27:13.318363   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:27:13.318396   30068 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:27:13.318406   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:27:13.318428   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:27:13.318449   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:27:13.318469   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:27:13.318506   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:13.318531   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.318544   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.318556   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.319121   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:27:13.345382   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:27:13.379289   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:27:13.406853   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:27:13.430624   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0621 18:27:13.452498   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0621 18:27:13.474381   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:27:13.497475   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:27:13.520548   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:27:13.543849   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:27:13.569722   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:27:13.594191   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0621 18:27:13.611312   30068 ssh_runner.go:195] Run: openssl version
	I0621 18:27:13.616881   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:27:13.627054   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.631162   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.631214   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.636845   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:27:13.648132   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:27:13.658846   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.663074   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.663140   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.668358   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 18:27:13.678369   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:27:13.688293   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.692517   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.692581   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.697837   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
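The openssl/ln sequence above wires certificates into the system trust store by subject hash (hence names like 51391683.0 and b5213941.0). A sketch of the same idea, shelling out to openssl and creating the <hash>.0 symlink:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert computes the OpenSSL subject hash of certPath and symlinks
	// <hash>.0 in certsDir to it, mirroring the `openssl x509 -hash` + `ln -fs` steps.
	func linkCACert(certPath, certsDir string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		// Replace any stale link, mirroring `ln -fs`.
		_ = os.Remove(link)
		return link, os.Symlink(certPath, link)
	}

	func main() {
		link, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
		fmt.Println(link, err)
	}
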
	I0621 18:27:13.707967   30068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:27:13.711761   30068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 18:27:13.711821   30068 kubeadm.go:391] StartCluster: {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:27:13.711887   30068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0621 18:27:13.711960   30068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0621 18:27:13.752929   30068 cri.go:89] found id: ""
	I0621 18:27:13.753017   30068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0621 18:27:13.762514   30068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0621 18:27:13.771612   30068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0621 18:27:13.781740   30068 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0621 18:27:13.781758   30068 kubeadm.go:156] found existing configuration files:
	
	I0621 18:27:13.781811   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0621 18:27:13.790876   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0621 18:27:13.790943   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0621 18:27:13.800011   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0621 18:27:13.809117   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0621 18:27:13.809168   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0621 18:27:13.818279   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0621 18:27:13.827522   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0621 18:27:13.827584   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0621 18:27:13.836671   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0621 18:27:13.845242   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0621 18:27:13.845298   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
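A sketch of the stale-config cleanup above: keep a kubeconfig only if it already references the expected control-plane endpoint, otherwise remove it so kubeadm init regenerates it (paths and endpoint copied from the log):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// cleanupStaleConfigs deletes any kubeconfig that does not mention the
	// expected endpoint; missing files are simply (re)removed, matching the log.
	func cleanupStaleConfigs(endpoint string, paths []string) {
		for _, p := range paths {
			data, err := os.ReadFile(p)
			if err == nil && strings.Contains(string(data), endpoint) {
				continue // config already targets the endpoint; keep it
			}
			_ = os.Remove(p)
			fmt.Printf("removed stale config: %s\n", p)
		}
	}

	func main() {
		cleanupStaleConfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}
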
	I0621 18:27:13.854365   30068 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0621 18:27:13.951888   30068 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0621 18:27:13.951970   30068 kubeadm.go:309] [preflight] Running pre-flight checks
	I0621 18:27:14.081675   30068 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0621 18:27:14.081845   30068 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0621 18:27:14.081983   30068 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0621 18:27:14.292951   30068 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0621 18:27:14.423174   30068 out.go:204]   - Generating certificates and keys ...
	I0621 18:27:14.423287   30068 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0621 18:27:14.423355   30068 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0621 18:27:14.524306   30068 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0621 18:27:14.693249   30068 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0621 18:27:14.771462   30068 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0621 18:27:14.965492   30068 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0621 18:27:15.095342   30068 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0621 18:27:15.095646   30068 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-406291 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I0621 18:27:15.247328   30068 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0621 18:27:15.247729   30068 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-406291 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I0621 18:27:15.326656   30068 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0621 18:27:15.470979   30068 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0621 18:27:15.620090   30068 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0621 18:27:15.620402   30068 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0621 18:27:15.715693   30068 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0621 18:27:16.259484   30068 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0621 18:27:16.704626   30068 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0621 18:27:16.836633   30068 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0621 18:27:16.996818   30068 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0621 18:27:16.997517   30068 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0621 18:27:16.999949   30068 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0621 18:27:17.001874   30068 out.go:204]   - Booting up control plane ...
	I0621 18:27:17.001982   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0621 18:27:17.002874   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0621 18:27:17.003729   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0621 18:27:17.018894   30068 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0621 18:27:17.019816   30068 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0621 18:27:17.019944   30068 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0621 18:27:17.138099   30068 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0621 18:27:17.138195   30068 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0621 18:27:17.639115   30068 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.282189ms
	I0621 18:27:17.639214   30068 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0621 18:27:23.502026   30068 kubeadm.go:309] [api-check] The API server is healthy after 5.864418149s
	I0621 18:27:23.512938   30068 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0621 18:27:23.528670   30068 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0621 18:27:24.059886   30068 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0621 18:27:24.060060   30068 kubeadm.go:309] [mark-control-plane] Marking the node ha-406291 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0621 18:27:24.071607   30068 kubeadm.go:309] [bootstrap-token] Using token: ha2utu.p9k0bq1xsr5791t7
	I0621 18:27:24.073185   30068 out.go:204]   - Configuring RBAC rules ...
	I0621 18:27:24.073336   30068 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0621 18:27:24.084336   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0621 18:27:24.092265   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0621 18:27:24.096415   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0621 18:27:24.101175   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0621 18:27:24.104689   30068 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0621 18:27:24.121568   30068 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0621 18:27:24.349610   30068 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0621 18:27:24.907607   30068 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0621 18:27:24.908452   30068 kubeadm.go:309] 
	I0621 18:27:24.908529   30068 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0621 18:27:24.908541   30068 kubeadm.go:309] 
	I0621 18:27:24.908607   30068 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0621 18:27:24.908645   30068 kubeadm.go:309] 
	I0621 18:27:24.908698   30068 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0621 18:27:24.908780   30068 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0621 18:27:24.908863   30068 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0621 18:27:24.908873   30068 kubeadm.go:309] 
	I0621 18:27:24.908975   30068 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0621 18:27:24.908993   30068 kubeadm.go:309] 
	I0621 18:27:24.909038   30068 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0621 18:27:24.909045   30068 kubeadm.go:309] 
	I0621 18:27:24.909086   30068 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0621 18:27:24.909160   30068 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0621 18:27:24.909256   30068 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0621 18:27:24.909274   30068 kubeadm.go:309] 
	I0621 18:27:24.909401   30068 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0621 18:27:24.909522   30068 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0621 18:27:24.909544   30068 kubeadm.go:309] 
	I0621 18:27:24.909671   30068 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ha2utu.p9k0bq1xsr5791t7 \
	I0621 18:27:24.909771   30068 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df \
	I0621 18:27:24.909810   30068 kubeadm.go:309] 	--control-plane 
	I0621 18:27:24.909824   30068 kubeadm.go:309] 
	I0621 18:27:24.909898   30068 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0621 18:27:24.909904   30068 kubeadm.go:309] 
	I0621 18:27:24.909977   30068 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ha2utu.p9k0bq1xsr5791t7 \
	I0621 18:27:24.910064   30068 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df 
	I0621 18:27:24.910664   30068 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
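The --discovery-token-ca-cert-hash in the join command above is the SHA-256 of the cluster CA's Subject Public Key Info. A sketch of computing it from ca.crt:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	// caCertHash reproduces kubeadm's discovery hash: SHA-256 over the
	// DER-encoded SubjectPublicKeyInfo of the CA certificate.
	func caCertHash(caPath string) (string, error) {
		data, err := os.ReadFile(caPath)
		if err != nil {
			return "", err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return "", fmt.Errorf("no PEM data in %s", caPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return "", err
		}
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		return fmt.Sprintf("sha256:%x", sum), nil
	}

	func main() {
		hash, err := caCertHash("/var/lib/minikube/certs/ca.crt")
		fmt.Println(hash, err)
	}
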
	I0621 18:27:24.910700   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:27:24.910708   30068 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0621 18:27:24.912398   30068 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0621 18:27:24.913676   30068 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0621 18:27:24.919660   30068 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0621 18:27:24.919677   30068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0621 18:27:24.938734   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0621 18:27:25.303975   30068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0621 18:27:25.304070   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:25.304073   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-406291 minikube.k8s.io/updated_at=2024_06_21T18_27_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600 minikube.k8s.io/name=ha-406291 minikube.k8s.io/primary=true
	I0621 18:27:25.334777   30068 ops.go:34] apiserver oom_adj: -16
	I0621 18:27:25.436873   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:25.937461   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:26.436991   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:26.937206   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:27.437152   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:27.937860   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:28.437177   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:28.937036   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:29.437007   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:29.937140   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:30.437060   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:30.937199   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:31.437695   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:31.937675   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:32.437034   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:32.937808   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:33.437793   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:33.937401   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:34.437307   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:34.937172   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:35.437428   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:35.937146   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:36.436951   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:36.937873   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:37.039583   30068 kubeadm.go:1107] duration metric: took 11.735587948s to wait for elevateKubeSystemPrivileges
	W0621 18:27:37.039626   30068 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0621 18:27:37.039635   30068 kubeadm.go:393] duration metric: took 23.327819322s to StartCluster
	I0621 18:27:37.039654   30068 settings.go:142] acquiring lock: {Name:mkdbb660cad4d8fb446e5c2ca4439ea3326e9592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:37.039737   30068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:27:37.040362   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/kubeconfig: {Name:mk87038194ab41f67dd50d90b017d32a83c3da4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:37.040584   30068 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:27:37.040604   30068 start.go:240] waiting for startup goroutines ...
	I0621 18:27:37.040603   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0621 18:27:37.040612   30068 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0621 18:27:37.040669   30068 addons.go:69] Setting storage-provisioner=true in profile "ha-406291"
	I0621 18:27:37.040677   30068 addons.go:69] Setting default-storageclass=true in profile "ha-406291"
	I0621 18:27:37.040699   30068 addons.go:234] Setting addon storage-provisioner=true in "ha-406291"
	I0621 18:27:37.040730   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:27:37.040700   30068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-406291"
	I0621 18:27:37.040772   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:37.041052   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.041075   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.041146   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.041174   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.055583   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42699
	I0621 18:27:37.056062   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.056549   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.056570   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.056894   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.057371   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.057399   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.061343   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44857
	I0621 18:27:37.061846   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.062393   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.062418   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.062721   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.062885   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.065021   30068 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:27:37.065351   30068 kapi.go:59] client config for ha-406291: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt", KeyFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key", CAFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf98a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0621 18:27:37.065825   30068 cert_rotation.go:137] Starting client certificate rotation controller
	I0621 18:27:37.066065   30068 addons.go:234] Setting addon default-storageclass=true in "ha-406291"
	I0621 18:27:37.066106   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:27:37.066471   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.066512   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.072759   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39433
	I0621 18:27:37.073274   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.073791   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.073819   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.074169   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.074346   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.076096   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:37.078312   30068 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0621 18:27:37.079815   30068 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 18:27:37.079840   30068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0621 18:27:37.079864   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:37.081896   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0621 18:27:37.082293   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.082859   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.082878   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.083163   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.083202   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.083607   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.083648   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.083733   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:37.083752   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.083817   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:37.083990   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:37.084135   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:37.084288   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:37.103512   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42081
	I0621 18:27:37.103937   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.104456   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.104473   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.104853   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.105052   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.106976   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:37.107211   30068 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0621 18:27:37.107231   30068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0621 18:27:37.107252   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:37.110295   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.110729   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:37.110755   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.110870   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:37.111030   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:37.111197   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:37.111314   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:37.137868   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0621 18:27:37.228739   30068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 18:27:37.290397   30068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0621 18:27:37.684619   30068 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0621 18:27:37.902862   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.902882   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.902957   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.902988   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903179   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903194   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903203   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.903210   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903287   30068 main.go:141] libmachine: (ha-406291) DBG | Closing plugin on server side
	I0621 18:27:37.903300   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903312   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903321   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.903328   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903474   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903485   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903513   30068 main.go:141] libmachine: (ha-406291) DBG | Closing plugin on server side
	I0621 18:27:37.903578   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903595   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903740   30068 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0621 18:27:37.903767   30068 round_trippers.go:469] Request Headers:
	I0621 18:27:37.903778   30068 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:27:37.903784   30068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:27:37.922164   30068 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0621 18:27:37.922691   30068 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0621 18:27:37.922706   30068 round_trippers.go:469] Request Headers:
	I0621 18:27:37.922713   30068 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:27:37.922718   30068 round_trippers.go:473]     Content-Type: application/json
	I0621 18:27:37.922720   30068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:27:37.926249   30068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0621 18:27:37.926491   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.926512   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.926731   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.926748   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.928515   30068 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0621 18:27:37.930095   30068 addons.go:510] duration metric: took 889.47949ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0621 18:27:37.930127   30068 start.go:245] waiting for cluster config update ...
	I0621 18:27:37.930137   30068 start.go:254] writing updated cluster config ...
	I0621 18:27:37.931687   30068 out.go:177] 
	I0621 18:27:37.933039   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:37.933102   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:37.934716   30068 out.go:177] * Starting "ha-406291-m02" control-plane node in "ha-406291" cluster
	I0621 18:27:37.935953   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:27:37.935970   30068 cache.go:56] Caching tarball of preloaded images
	I0621 18:27:37.936052   30068 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:27:37.936063   30068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:27:37.936142   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:37.936325   30068 start.go:360] acquireMachinesLock for ha-406291-m02: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:27:37.936370   30068 start.go:364] duration metric: took 24.972µs to acquireMachinesLock for "ha-406291-m02"
	I0621 18:27:37.936392   30068 start.go:93] Provisioning new machine with config: &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:27:37.936481   30068 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0621 18:27:37.938349   30068 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0621 18:27:37.938428   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.938450   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.952767   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34515
	I0621 18:27:37.953176   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.953649   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.953669   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.953963   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.954162   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:37.954301   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:37.954431   30068 start.go:159] libmachine.API.Create for "ha-406291" (driver="kvm2")
	I0621 18:27:37.954456   30068 client.go:168] LocalClient.Create starting
	I0621 18:27:37.954488   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem
	I0621 18:27:37.954518   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:27:37.954538   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:27:37.954589   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem
	I0621 18:27:37.954607   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:27:37.954621   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:27:37.954636   30068 main.go:141] libmachine: Running pre-create checks...
	I0621 18:27:37.954644   30068 main.go:141] libmachine: (ha-406291-m02) Calling .PreCreateCheck
	I0621 18:27:37.954836   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:37.955238   30068 main.go:141] libmachine: Creating machine...
	I0621 18:27:37.955253   30068 main.go:141] libmachine: (ha-406291-m02) Calling .Create
	I0621 18:27:37.955404   30068 main.go:141] libmachine: (ha-406291-m02) Creating KVM machine...
	I0621 18:27:37.956748   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found existing default KVM network
	I0621 18:27:37.956951   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found existing private KVM network mk-ha-406291
	I0621 18:27:37.957069   30068 main.go:141] libmachine: (ha-406291-m02) Setting up store path in /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 ...
	I0621 18:27:37.957091   30068 main.go:141] libmachine: (ha-406291-m02) Building disk image from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 18:27:37.957139   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:37.957062   30460 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:27:37.957278   30068 main.go:141] libmachine: (ha-406291-m02) Downloading /home/jenkins/minikube-integration/19112-8111/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0621 18:27:38.178433   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.178291   30460 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa...
	I0621 18:27:38.322659   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.322470   30460 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/ha-406291-m02.rawdisk...
	I0621 18:27:38.322709   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 (perms=drwx------)
	I0621 18:27:38.322719   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Writing magic tar header
	I0621 18:27:38.322734   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Writing SSH key tar header
	I0621 18:27:38.322745   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.322583   30460 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 ...
	I0621 18:27:38.322758   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02
	I0621 18:27:38.322822   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines
	I0621 18:27:38.322839   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines (perms=drwxr-xr-x)
	I0621 18:27:38.322855   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube (perms=drwxr-xr-x)
	I0621 18:27:38.322864   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111 (perms=drwxrwxr-x)
	I0621 18:27:38.322874   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0621 18:27:38.322882   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0621 18:27:38.322896   30068 main.go:141] libmachine: (ha-406291-m02) Creating domain...
	I0621 18:27:38.322919   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:27:38.322939   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111
	I0621 18:27:38.322950   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0621 18:27:38.322968   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins
	I0621 18:27:38.322980   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home
	I0621 18:27:38.322988   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Skipping /home - not owner
	I0621 18:27:38.324031   30068 main.go:141] libmachine: (ha-406291-m02) define libvirt domain using xml: 
	I0621 18:27:38.324058   30068 main.go:141] libmachine: (ha-406291-m02) <domain type='kvm'>
	I0621 18:27:38.324071   30068 main.go:141] libmachine: (ha-406291-m02)   <name>ha-406291-m02</name>
	I0621 18:27:38.324078   30068 main.go:141] libmachine: (ha-406291-m02)   <memory unit='MiB'>2200</memory>
	I0621 18:27:38.324087   30068 main.go:141] libmachine: (ha-406291-m02)   <vcpu>2</vcpu>
	I0621 18:27:38.324098   30068 main.go:141] libmachine: (ha-406291-m02)   <features>
	I0621 18:27:38.324107   30068 main.go:141] libmachine: (ha-406291-m02)     <acpi/>
	I0621 18:27:38.324116   30068 main.go:141] libmachine: (ha-406291-m02)     <apic/>
	I0621 18:27:38.324125   30068 main.go:141] libmachine: (ha-406291-m02)     <pae/>
	I0621 18:27:38.324134   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324149   30068 main.go:141] libmachine: (ha-406291-m02)   </features>
	I0621 18:27:38.324164   30068 main.go:141] libmachine: (ha-406291-m02)   <cpu mode='host-passthrough'>
	I0621 18:27:38.324173   30068 main.go:141] libmachine: (ha-406291-m02)   
	I0621 18:27:38.324184   30068 main.go:141] libmachine: (ha-406291-m02)   </cpu>
	I0621 18:27:38.324199   30068 main.go:141] libmachine: (ha-406291-m02)   <os>
	I0621 18:27:38.324209   30068 main.go:141] libmachine: (ha-406291-m02)     <type>hvm</type>
	I0621 18:27:38.324220   30068 main.go:141] libmachine: (ha-406291-m02)     <boot dev='cdrom'/>
	I0621 18:27:38.324231   30068 main.go:141] libmachine: (ha-406291-m02)     <boot dev='hd'/>
	I0621 18:27:38.324258   30068 main.go:141] libmachine: (ha-406291-m02)     <bootmenu enable='no'/>
	I0621 18:27:38.324280   30068 main.go:141] libmachine: (ha-406291-m02)   </os>
	I0621 18:27:38.324293   30068 main.go:141] libmachine: (ha-406291-m02)   <devices>
	I0621 18:27:38.324310   30068 main.go:141] libmachine: (ha-406291-m02)     <disk type='file' device='cdrom'>
	I0621 18:27:38.324333   30068 main.go:141] libmachine: (ha-406291-m02)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/boot2docker.iso'/>
	I0621 18:27:38.324344   30068 main.go:141] libmachine: (ha-406291-m02)       <target dev='hdc' bus='scsi'/>
	I0621 18:27:38.324350   30068 main.go:141] libmachine: (ha-406291-m02)       <readonly/>
	I0621 18:27:38.324357   30068 main.go:141] libmachine: (ha-406291-m02)     </disk>
	I0621 18:27:38.324363   30068 main.go:141] libmachine: (ha-406291-m02)     <disk type='file' device='disk'>
	I0621 18:27:38.324375   30068 main.go:141] libmachine: (ha-406291-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0621 18:27:38.324390   30068 main.go:141] libmachine: (ha-406291-m02)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/ha-406291-m02.rawdisk'/>
	I0621 18:27:38.324401   30068 main.go:141] libmachine: (ha-406291-m02)       <target dev='hda' bus='virtio'/>
	I0621 18:27:38.324412   30068 main.go:141] libmachine: (ha-406291-m02)     </disk>
	I0621 18:27:38.324421   30068 main.go:141] libmachine: (ha-406291-m02)     <interface type='network'>
	I0621 18:27:38.324431   30068 main.go:141] libmachine: (ha-406291-m02)       <source network='mk-ha-406291'/>
	I0621 18:27:38.324442   30068 main.go:141] libmachine: (ha-406291-m02)       <model type='virtio'/>
	I0621 18:27:38.324453   30068 main.go:141] libmachine: (ha-406291-m02)     </interface>
	I0621 18:27:38.324465   30068 main.go:141] libmachine: (ha-406291-m02)     <interface type='network'>
	I0621 18:27:38.324474   30068 main.go:141] libmachine: (ha-406291-m02)       <source network='default'/>
	I0621 18:27:38.324481   30068 main.go:141] libmachine: (ha-406291-m02)       <model type='virtio'/>
	I0621 18:27:38.324493   30068 main.go:141] libmachine: (ha-406291-m02)     </interface>
	I0621 18:27:38.324503   30068 main.go:141] libmachine: (ha-406291-m02)     <serial type='pty'>
	I0621 18:27:38.324516   30068 main.go:141] libmachine: (ha-406291-m02)       <target port='0'/>
	I0621 18:27:38.324527   30068 main.go:141] libmachine: (ha-406291-m02)     </serial>
	I0621 18:27:38.324540   30068 main.go:141] libmachine: (ha-406291-m02)     <console type='pty'>
	I0621 18:27:38.324553   30068 main.go:141] libmachine: (ha-406291-m02)       <target type='serial' port='0'/>
	I0621 18:27:38.324562   30068 main.go:141] libmachine: (ha-406291-m02)     </console>
	I0621 18:27:38.324572   30068 main.go:141] libmachine: (ha-406291-m02)     <rng model='virtio'>
	I0621 18:27:38.324596   30068 main.go:141] libmachine: (ha-406291-m02)       <backend model='random'>/dev/random</backend>
	I0621 18:27:38.324609   30068 main.go:141] libmachine: (ha-406291-m02)     </rng>
	I0621 18:27:38.324630   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324640   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324648   30068 main.go:141] libmachine: (ha-406291-m02)   </devices>
	I0621 18:27:38.324660   30068 main.go:141] libmachine: (ha-406291-m02) </domain>
	I0621 18:27:38.324670   30068 main.go:141] libmachine: (ha-406291-m02) 
	I0621 18:27:38.332042   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:20:08:0e in network default
	I0621 18:27:38.332641   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring networks are active...
	I0621 18:27:38.332676   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:38.333428   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring network default is active
	I0621 18:27:38.333804   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring network mk-ha-406291 is active
	I0621 18:27:38.334296   30068 main.go:141] libmachine: (ha-406291-m02) Getting domain xml...
	I0621 18:27:38.335120   30068 main.go:141] libmachine: (ha-406291-m02) Creating domain...
	I0621 18:27:39.549305   30068 main.go:141] libmachine: (ha-406291-m02) Waiting to get IP...
	I0621 18:27:39.550967   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:39.551951   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:39.551976   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:39.551936   30460 retry.go:31] will retry after 267.635955ms: waiting for machine to come up
	I0621 18:27:39.821522   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:39.821997   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:39.822029   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:39.821946   30460 retry.go:31] will retry after 374.873977ms: waiting for machine to come up
	I0621 18:27:40.198386   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:40.198873   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:40.198904   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:40.198809   30460 retry.go:31] will retry after 315.815993ms: waiting for machine to come up
	I0621 18:27:40.516366   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:40.516862   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:40.516886   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:40.516817   30460 retry.go:31] will retry after 541.866776ms: waiting for machine to come up
	I0621 18:27:41.060525   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:41.061206   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:41.061240   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:41.061128   30460 retry.go:31] will retry after 493.062164ms: waiting for machine to come up
	I0621 18:27:41.555747   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:41.556109   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:41.556139   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:41.556061   30460 retry.go:31] will retry after 805.68132ms: waiting for machine to come up
	I0621 18:27:42.362929   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:42.363432   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:42.363464   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:42.363390   30460 retry.go:31] will retry after 986.445399ms: waiting for machine to come up
	I0621 18:27:43.351818   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:43.352265   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:43.352293   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:43.352201   30460 retry.go:31] will retry after 1.001415085s: waiting for machine to come up
	I0621 18:27:44.355253   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:44.355689   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:44.355710   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:44.355671   30460 retry.go:31] will retry after 1.270979624s: waiting for machine to come up
	I0621 18:27:45.627787   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:45.628323   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:45.628354   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:45.628272   30460 retry.go:31] will retry after 2.328221347s: waiting for machine to come up
	I0621 18:27:47.958352   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:47.958918   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:47.958945   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:47.958858   30460 retry.go:31] will retry after 2.603205559s: waiting for machine to come up
	I0621 18:27:50.565502   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:50.565956   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:50.565982   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:50.565839   30460 retry.go:31] will retry after 3.267607258s: waiting for machine to come up
	I0621 18:27:53.834801   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:53.835311   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:53.835344   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:53.835270   30460 retry.go:31] will retry after 4.450176964s: waiting for machine to come up
	I0621 18:27:58.286744   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.287205   30068 main.go:141] libmachine: (ha-406291-m02) Found IP for machine: 192.168.39.89
	I0621 18:27:58.287228   30068 main.go:141] libmachine: (ha-406291-m02) Reserving static IP address...
	I0621 18:27:58.287241   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has current primary IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.287601   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find host DHCP lease matching {name: "ha-406291-m02", mac: "52:54:00:a6:9a:09", ip: "192.168.39.89"} in network mk-ha-406291
	I0621 18:27:58.359643   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Getting to WaitForSSH function...
	I0621 18:27:58.359672   30068 main.go:141] libmachine: (ha-406291-m02) Reserved static IP address: 192.168.39.89
	I0621 18:27:58.359686   30068 main.go:141] libmachine: (ha-406291-m02) Waiting for SSH to be available...
	I0621 18:27:58.362234   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.362656   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.362687   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.362831   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH client type: external
	I0621 18:27:58.362856   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa (-rw-------)
	I0621 18:27:58.362889   30068 main.go:141] libmachine: (ha-406291-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:58.362901   30068 main.go:141] libmachine: (ha-406291-m02) DBG | About to run SSH command:
	I0621 18:27:58.362914   30068 main.go:141] libmachine: (ha-406291-m02) DBG | exit 0
	I0621 18:27:58.489760   30068 main.go:141] libmachine: (ha-406291-m02) DBG | SSH cmd err, output: <nil>: 
	I0621 18:27:58.490247   30068 main.go:141] libmachine: (ha-406291-m02) KVM machine creation complete!
	I0621 18:27:58.490512   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:58.491093   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:58.491338   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:58.491506   30068 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0621 18:27:58.491523   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:27:58.492807   30068 main.go:141] libmachine: Detecting operating system of created instance...
	I0621 18:27:58.492820   30068 main.go:141] libmachine: Waiting for SSH to be available...
	I0621 18:27:58.492825   30068 main.go:141] libmachine: Getting to WaitForSSH function...
	I0621 18:27:58.492853   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.495422   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.495802   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.495822   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.496013   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.496199   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.496377   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.496515   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.496690   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.496943   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.496957   30068 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0621 18:27:58.609072   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:58.609094   30068 main.go:141] libmachine: Detecting the provisioner...
	I0621 18:27:58.609101   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.611976   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.612412   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.612450   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.612655   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.612869   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.613083   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.613234   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.613421   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.613617   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.613629   30068 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0621 18:27:58.726636   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0621 18:27:58.726736   30068 main.go:141] libmachine: found compatible host: buildroot
	I0621 18:27:58.726751   30068 main.go:141] libmachine: Provisioning with buildroot...
	I0621 18:27:58.726768   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.727017   30068 buildroot.go:166] provisioning hostname "ha-406291-m02"
	I0621 18:27:58.727040   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.727234   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.729851   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.730255   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.730296   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.730453   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.730628   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.730787   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.730932   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.731090   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.731271   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.731295   30068 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291-m02 && echo "ha-406291-m02" | sudo tee /etc/hostname
	I0621 18:27:58.855682   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291-m02
	
	I0621 18:27:58.855710   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.858373   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.858679   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.858702   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.858921   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.859107   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.859289   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.859473   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.859613   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.859768   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.859784   30068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:27:58.979692   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:58.979722   30068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:27:58.979735   30068 buildroot.go:174] setting up certificates
	I0621 18:27:58.979743   30068 provision.go:84] configureAuth start
	I0621 18:27:58.979750   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.980076   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:58.982730   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.983078   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.983110   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.983252   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.985344   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.985701   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.985721   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.985890   30068 provision.go:143] copyHostCerts
	I0621 18:27:58.985924   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:58.985962   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:27:58.985976   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:58.986057   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:27:58.986156   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:58.986180   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:27:58.986187   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:58.986229   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:27:58.986293   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:58.986317   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:27:58.986326   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:58.986360   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:27:58.986426   30068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291-m02 san=[127.0.0.1 192.168.39.89 ha-406291-m02 localhost minikube]
	I0621 18:27:59.066564   30068 provision.go:177] copyRemoteCerts
	I0621 18:27:59.066626   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:27:59.066653   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.069578   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.069924   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.069947   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.070132   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.070298   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.070432   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.070553   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.157218   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:27:59.157315   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 18:27:59.181198   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:27:59.181277   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:27:59.204590   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:27:59.204671   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0621 18:27:59.228836   30068 provision.go:87] duration metric: took 249.081961ms to configureAuth
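
Editor's note: the configureAuth step above (provision.go) generates a server certificate whose SANs cover the node's IP and hostnames and copies it, together with the CA, onto the guest under /etc/docker. Below is a minimal, self-contained Go sketch (not minikube's actual code) of that kind of cert generation; the throwaway CA is an illustrative stand-in for .minikube/certs/ca.pem / ca-key.pem, and only the SANs from the log line are used.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA (minikube reuses the one under .minikube/certs/).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the "generating server cert" log line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-406291-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-406291-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.89")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
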
	I0621 18:27:59.228857   30068 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:27:59.229023   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:59.229086   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.231759   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.232083   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.232114   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.232338   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.232525   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.232684   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.232859   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.233030   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:59.233222   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:59.233247   30068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:27:59.513149   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 18:27:59.513176   30068 main.go:141] libmachine: Checking connection to Docker...
	I0621 18:27:59.513184   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetURL
	I0621 18:27:59.514352   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using libvirt version 6000000
	I0621 18:27:59.516825   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.517208   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.517232   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.517421   30068 main.go:141] libmachine: Docker is up and running!
	I0621 18:27:59.517438   30068 main.go:141] libmachine: Reticulating splines...
	I0621 18:27:59.517446   30068 client.go:171] duration metric: took 21.562982419s to LocalClient.Create
	I0621 18:27:59.517469   30068 start.go:167] duration metric: took 21.563040702s to libmachine.API.Create "ha-406291"
	I0621 18:27:59.517482   30068 start.go:293] postStartSetup for "ha-406291-m02" (driver="kvm2")
	I0621 18:27:59.517494   30068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:27:59.517516   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.517768   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:27:59.517792   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.520113   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.520510   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.520540   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.520681   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.520881   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.521084   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.521256   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.607755   30068 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:27:59.611555   30068 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:27:59.611581   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:27:59.611696   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:27:59.611804   30068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:27:59.611817   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:27:59.611939   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:27:59.620359   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:59.643420   30068 start.go:296] duration metric: took 125.923821ms for postStartSetup
	I0621 18:27:59.643465   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:59.644062   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:59.646345   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.646685   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.646713   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.646924   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:59.647158   30068 start.go:128] duration metric: took 21.710666055s to createHost
	I0621 18:27:59.647181   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.649469   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.649766   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.649808   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.649962   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.650164   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.650334   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.650463   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.650585   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:59.650778   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:59.650790   30068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0621 18:27:59.762223   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718994479.737744516
	
	I0621 18:27:59.762248   30068 fix.go:216] guest clock: 1718994479.737744516
	I0621 18:27:59.762259   30068 fix.go:229] Guest: 2024-06-21 18:27:59.737744516 +0000 UTC Remote: 2024-06-21 18:27:59.647170431 +0000 UTC m=+77.232139235 (delta=90.574085ms)
	I0621 18:27:59.762274   30068 fix.go:200] guest clock delta is within tolerance: 90.574085ms
	I0621 18:27:59.762279   30068 start.go:83] releasing machines lock for "ha-406291-m02", held for 21.825898335s
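
Editor's note: the guest-clock lines above (fix.go) compare the VM's `date +%s.%N` output against the host clock and accept the drift when it is within tolerance. A small sketch of that comparison, assuming a 2-second threshold (the actual tolerance value is not shown in the log):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClock parses the output of `date +%s.%N`, e.g. "1718994479.737744516".
func guestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec := int64(0)
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed threshold, not taken from the log
	guest, _ := guestClock("1718994479.737744516")
	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
	}
}
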
	I0621 18:27:59.762294   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.762550   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:59.765379   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.765744   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.765772   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.768017   30068 out.go:177] * Found network options:
	I0621 18:27:59.769201   30068 out.go:177]   - NO_PROXY=192.168.39.198
	W0621 18:27:59.770311   30068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0621 18:27:59.770350   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.770853   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.771049   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.771143   30068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:27:59.771180   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	W0621 18:27:59.771247   30068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0621 18:27:59.771305   30068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:27:59.771322   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.774073   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774210   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774455   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.774482   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774586   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.774595   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.774615   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774740   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.774796   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.774875   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.774963   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.775030   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.775150   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.775184   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:28:00.009851   30068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:28:00.016373   30068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:28:00.016450   30068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:28:00.032199   30068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0621 18:28:00.032221   30068 start.go:494] detecting cgroup driver to use...
	I0621 18:28:00.032283   30068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:28:00.047343   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:28:00.061720   30068 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:28:00.061774   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:28:00.074668   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:28:00.087919   30068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:28:00.213060   30068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:28:00.376339   30068 docker.go:233] disabling docker service ...
	I0621 18:28:00.376406   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:28:00.391732   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:28:00.405305   30068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:28:00.525867   30068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:28:00.642362   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 18:28:00.656276   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:28:00.673811   30068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:28:00.673883   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.683794   30068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:28:00.683849   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.693601   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.703298   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.712924   30068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:28:00.722921   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.733272   30068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.749781   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
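
Editor's note: the sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the pinned pause image and the cgroupfs cgroup manager. A hedged Go sketch of the first two edits done locally with regexp (a stand-in for running sed over SSH; paths and permissions are taken from the log, error handling is simplified):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	s := string(data)
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("patched", conf)
}
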
	I0621 18:28:00.759708   30068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:28:00.768749   30068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 18:28:00.768811   30068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 18:28:00.780758   30068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
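
Editor's note: the crio.go warning above says the failed sysctl "might be okay" because /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded; the log therefore follows the failed read with a modprobe and then enables IPv4 forwarding. A sketch of that check-then-load fallback, using os/exec locally as an illustrative stand-in for minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w: %s", name, args, err, out)
	}
	return nil
}

func main() {
	// The sysctl is only present once br_netfilter is loaded.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
		_ = run("sudo", "modprobe", "br_netfilter")
	}
	// Pod-to-pod routing needs IPv4 forwarding on the node.
	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}
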
	I0621 18:28:00.789993   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:28:00.904855   30068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 18:28:01.039631   30068 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:28:01.039706   30068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:28:01.044480   30068 start.go:562] Will wait 60s for crictl version
	I0621 18:28:01.044536   30068 ssh_runner.go:195] Run: which crictl
	I0621 18:28:01.048220   30068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:28:01.089333   30068 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:28:01.089402   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:28:01.115665   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:28:01.144585   30068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:28:01.145952   30068 out.go:177]   - env NO_PROXY=192.168.39.198
	I0621 18:28:01.147149   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:28:01.149745   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:28:01.150121   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:28:01.150153   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:28:01.150424   30068 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:28:01.154395   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:28:01.167802   30068 mustload.go:65] Loading cluster: ha-406291
	I0621 18:28:01.168024   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:28:01.168528   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:28:01.168581   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:28:01.183458   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35465
	I0621 18:28:01.183955   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:28:01.184452   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:28:01.184472   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:28:01.184809   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:28:01.185006   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:28:01.186504   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:28:01.186796   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:28:01.186838   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:28:01.201898   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38995
	I0621 18:28:01.202307   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:28:01.202715   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:28:01.202735   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:28:01.203060   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:28:01.203242   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:28:01.203402   30068 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.89
	I0621 18:28:01.203414   30068 certs.go:194] generating shared ca certs ...
	I0621 18:28:01.203427   30068 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.203536   30068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:28:01.203569   30068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:28:01.203578   30068 certs.go:256] generating profile certs ...
	I0621 18:28:01.203637   30068 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:28:01.203663   30068 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63
	I0621 18:28:01.203682   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.89 192.168.39.254]
	I0621 18:28:01.277240   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 ...
	I0621 18:28:01.277269   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63: {Name:mk0eb1e86875fe5e87f845f9e621f66001c859bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.277433   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63 ...
	I0621 18:28:01.277446   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63: {Name:mk95e28e76a927e44fae3dabafa76ecc474c70ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.277517   30068 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:28:01.277686   30068 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
	I0621 18:28:01.277852   30068 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:28:01.277870   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:28:01.277883   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:28:01.277894   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:28:01.277906   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:28:01.277922   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:28:01.277934   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:28:01.277946   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:28:01.277957   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:28:01.278003   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:28:01.278030   30068 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:28:01.278039   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:28:01.278059   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:28:01.278080   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:28:01.278100   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:28:01.278136   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:28:01.278162   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.278179   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.278191   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.278220   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:28:01.281289   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:28:01.281749   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:28:01.281771   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:28:01.281960   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:28:01.282180   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:28:01.282351   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:28:01.282534   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:28:01.350153   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0621 18:28:01.355146   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0621 18:28:01.366317   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0621 18:28:01.370418   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0621 18:28:01.381527   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0621 18:28:01.385371   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0621 18:28:01.395583   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0621 18:28:01.399523   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0621 18:28:01.409427   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0621 18:28:01.413340   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0621 18:28:01.424281   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0621 18:28:01.428574   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0621 18:28:01.443501   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:28:01.467141   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:28:01.489464   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:28:01.512839   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:28:01.536345   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0621 18:28:01.560903   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0621 18:28:01.585228   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:28:01.609236   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:28:01.632797   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:28:01.657717   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:28:01.680728   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:28:01.704813   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0621 18:28:01.722206   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0621 18:28:01.739548   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0621 18:28:01.757066   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0621 18:28:01.773769   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0621 18:28:01.790648   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0621 18:28:01.807019   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0621 18:28:01.824606   30068 ssh_runner.go:195] Run: openssl version
	I0621 18:28:01.830760   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:28:01.841994   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.846701   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.846753   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.852556   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0621 18:28:01.863407   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:28:01.874586   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.879134   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.879185   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.884636   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:28:01.895639   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:28:01.907107   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.911747   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.911813   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.917537   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 18:28:01.928452   30068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:28:01.932569   30068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 18:28:01.932640   30068 kubeadm.go:928] updating node {m02 192.168.39.89 8443 v1.30.2 crio true true} ...
	I0621 18:28:01.932831   30068 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0621 18:28:01.932869   30068 kube-vip.go:115] generating kube-vip config ...
	I0621 18:28:01.932919   30068 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:28:01.949970   30068 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:28:01.950046   30068 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0621 18:28:01.950102   30068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:28:01.960116   30068 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0621 18:28:01.960197   30068 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0621 18:28:01.969893   30068 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0621 18:28:01.969926   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0621 18:28:01.969997   30068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0621 18:28:01.970033   30068 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0621 18:28:01.970001   30068 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0621 18:28:01.974344   30068 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0621 18:28:01.974375   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0621 18:28:02.755689   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0621 18:28:02.755764   30068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0621 18:28:02.760415   30068 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0621 18:28:02.760448   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0621 18:28:55.051081   30068 out.go:177] 
	W0621 18:28:55.052955   30068 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0] Decompressors:map[bz2:0xc000769610 gz:0xc000769618 tar:0xc0007695c0 tar.bz2:0xc0007695d0 tar.gz:0xc0007695e0 tar.xz:0xc0007695f0 tar.zst:0xc000769600 tbz2:0xc0007695d0 tgz:0xc0007695e0 txz:0xc0007695f0 tzst:0xc000769600 xz:0xc000769620 zip:0xc000769630 zst:0xc000769628] Getters:map[file:0xc0009371c0 http:0xc0008bcf50 https:0xc0008bcfa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:46716->151.101.193.55:443: read: connection reset by peer
	W0621 18:28:55.052979   30068 out.go:239] * 
	W0621 18:28:55.053829   30068 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0621 18:28:55.055312   30068 out.go:177] 
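
Editor's note: the fatal GUEST_START error above comes from the kubelet download. The go-getter URL asks for the binary plus its published .sha256 checksum from dl.k8s.io, and the TCP connection was reset mid-transfer. Below is a hedged Go sketch of the equivalent download-and-verify step; the destination path and the absence of retries are illustrative simplifications, not minikube's implementation.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet"

	bin, err := fetch(base)
	if err != nil {
		panic(err) // the test above died here with "connection reset by peer"
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}

	// The .sha256 file carries the expected hex digest of the binary.
	want := strings.Fields(string(sumFile))[0]
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch; refusing to install kubelet")
	}
	if err := os.WriteFile("kubelet", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubelet downloaded and verified")
}
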
	
	
	==> CRI-O <==
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.383862542Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9edd20ed-2665-4aa1-bb05-4b1c366528ea name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.419915161Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3a6c0f2a-f685-4b91-870a-4abc7f216888 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.419987615Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3a6c0f2a-f685-4b91-870a-4abc7f216888 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.421183971Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=71c29b89-7578-49e5-b4cb-6f35fb4d24e1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.421586175Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995276421566126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71c29b89-7578-49e5-b4cb-6f35fb4d24e1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.422161698Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=978b48a1-9e37-4183-81d1-9c8f2ee9597a name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.422341454Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=978b48a1-9e37-4183-81d1-9c8f2ee9597a name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.422582001Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=978b48a1-9e37-4183-81d1-9c8f2ee9597a name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.433696161Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1bb719cb-a61b-4b0b-b502-a1270ee844a5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.434056439Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-qvl48,Uid:59f123aa-60d0-4d29-b58e-cb9a43c26895,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994537417860566,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-21T18:28:57.107715447Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f6a39ae0-87ac-492a-a711-290e61bb895e,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1718994459650788102,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-21T18:27:39.331926430Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-7ng4v,Uid:4724701c-6f0e-45ed-8fc7-70245d4fa569,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994459636285025,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-21T18:27:39.324840171Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nx5xs,Uid:375157ef-5af0-41b9-8ed9-162e5a88c679,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1718994459635123081,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c679,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-21T18:27:39.328881687Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&PodSandboxMetadata{Name:kube-proxy-xnbqj,Uid:11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994457732197222,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-06-21T18:27:37.424597593Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&PodSandboxMetadata{Name:kindnet-vnds7,Uid:e921d86f-0ac3-413e-9e85-e809139ca210,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994457715084104,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-21T18:27:37.400904877Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-406291,Uid:81efe8b097b0aaeaaac87f9a6e2dfe3b,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1718994437888590878,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 81efe8b097b0aaeaaac87f9a6e2dfe3b,kubernetes.io/config.seen: 2024-06-21T18:27:17.383181217Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-406291,Uid:29bf44d365a415a68be28c9aad205c23,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994437887303918,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{kubernetes.io/config.hash: 29bf
44d365a415a68be28c9aad205c23,kubernetes.io/config.seen: 2024-06-21T18:27:17.383182123Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&PodSandboxMetadata{Name:etcd-ha-406291,Uid:28eb1f9a7974972f95837a71475ffe97,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994437864857022,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.198:2379,kubernetes.io/config.hash: 28eb1f9a7974972f95837a71475ffe97,kubernetes.io/config.seen: 2024-06-21T18:27:17.383174241Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&PodSandboxMetadata{Name:kube-a
piserver-ha-406291,Uid:ac2d2e5dadb6d48084ee46b3119245c5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994437841913023,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.198:8443,kubernetes.io/config.hash: ac2d2e5dadb6d48084ee46b3119245c5,kubernetes.io/config.seen: 2024-06-21T18:27:17.383178563Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-406291,Uid:8bd582f38b9812a77200f468c3cf9c0d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994437841113621,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.c
ontainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8bd582f38b9812a77200f468c3cf9c0d,kubernetes.io/config.seen: 2024-06-21T18:27:17.383179836Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=1bb719cb-a61b-4b0b-b502-a1270ee844a5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.434668046Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f288386b-7e72-4c27-9a6a-4c8794085eed name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.434726079Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f288386b-7e72-4c27-9a6a-4c8794085eed name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.434973431Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f288386b-7e72-4c27-9a6a-4c8794085eed name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.449452013Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=8f6ed2a8-1ea6-4388-928f-2848d2bb70de name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.449712612Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-qvl48,Uid:59f123aa-60d0-4d29-b58e-cb9a43c26895,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994537417860566,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-21T18:28:57.107715447Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f6a39ae0-87ac-492a-a711-290e61bb895e,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1718994459650788102,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-21T18:27:39.331926430Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-7ng4v,Uid:4724701c-6f0e-45ed-8fc7-70245d4fa569,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994459636285025,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-21T18:27:39.324840171Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nx5xs,Uid:375157ef-5af0-41b9-8ed9-162e5a88c679,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1718994459635123081,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c679,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-21T18:27:39.328881687Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&PodSandboxMetadata{Name:kube-proxy-xnbqj,Uid:11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994457732197222,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-06-21T18:27:37.424597593Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&PodSandboxMetadata{Name:kindnet-vnds7,Uid:e921d86f-0ac3-413e-9e85-e809139ca210,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994457715084104,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-21T18:27:37.400904877Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-406291,Uid:81efe8b097b0aaeaaac87f9a6e2dfe3b,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1718994437888590878,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 81efe8b097b0aaeaaac87f9a6e2dfe3b,kubernetes.io/config.seen: 2024-06-21T18:27:17.383181217Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-406291,Uid:29bf44d365a415a68be28c9aad205c23,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994437887303918,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{kubernetes.io/config.hash: 29bf
44d365a415a68be28c9aad205c23,kubernetes.io/config.seen: 2024-06-21T18:27:17.383182123Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&PodSandboxMetadata{Name:etcd-ha-406291,Uid:28eb1f9a7974972f95837a71475ffe97,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994437864857022,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.198:2379,kubernetes.io/config.hash: 28eb1f9a7974972f95837a71475ffe97,kubernetes.io/config.seen: 2024-06-21T18:27:17.383174241Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&PodSandboxMetadata{Name:kube-a
piserver-ha-406291,Uid:ac2d2e5dadb6d48084ee46b3119245c5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994437841913023,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.198:8443,kubernetes.io/config.hash: ac2d2e5dadb6d48084ee46b3119245c5,kubernetes.io/config.seen: 2024-06-21T18:27:17.383178563Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-406291,Uid:8bd582f38b9812a77200f468c3cf9c0d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718994437841113621,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.c
ontainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8bd582f38b9812a77200f468c3cf9c0d,kubernetes.io/config.seen: 2024-06-21T18:27:17.383179836Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=8f6ed2a8-1ea6-4388-928f-2848d2bb70de name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.450386263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=afc5a0d4-055a-4aad-8d1c-b3f42397b99c name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.450450148Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=afc5a0d4-055a-4aad-8d1c-b3f42397b99c name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.450688712Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=afc5a0d4-055a-4aad-8d1c-b3f42397b99c name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.467639183Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d507079-6e9e-473a-a63b-a0f9109f6d80 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.467729264Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d507079-6e9e-473a-a63b-a0f9109f6d80 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.474882636Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cfefead2-21c2-4691-80a3-cde7a5dab693 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.475544791Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995276475515695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cfefead2-21c2-4691-80a3-cde7a5dab693 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.476211455Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d34bdae9-5bda-4fa9-8fe0-2ab406f8ba55 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.476280272Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d34bdae9-5bda-4fa9-8fe0-2ab406f8ba55 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:16 ha-406291 crio[679]: time="2024-06-21 18:41:16.476931032Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d34bdae9-5bda-4fa9-8fe0-2ab406f8ba55 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	252cb2f279857       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   12 minutes ago      Running             busybox                   0                   cd0fd4f6a3d6c       busybox-fc5497c4f-qvl48
	6d732e2622f11       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   0                   59eb38b2794b0       coredns-7db6d8ff4d-7ng4v
	6088ccc5ec4be       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   0                   a68caa8578d30       coredns-7db6d8ff4d-nx5xs
	9d0ad73531279       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       0                   ab6a16146209c       storage-provisioner
	468b13f5a8054       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      13 minutes ago      Running             kindnet-cni               0                   956df8749e8db       kindnet-vnds7
	e41f8891c5177       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      13 minutes ago      Running             kube-proxy                0                   ab9fd8c2e0094       kube-proxy-xnbqj
	96a229fabb5aa       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     13 minutes ago      Running             kube-vip                  0                   79ad95611cf22       kube-vip-ha-406291
	a143e6000662a       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      13 minutes ago      Running             kube-scheduler            0                   7cae0fc993f3a       kube-scheduler-ha-406291
	89b399d67fa40       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      0                   afce4542ea7ca       etcd-ha-406291
	2d71c6ae5cee5       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      13 minutes ago      Running             kube-apiserver            0                   9552de7a0cb73       kube-apiserver-ha-406291
	3fbe446b39e8d       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      13 minutes ago      Running             kube-controller-manager   0                   2b8837f8e36da       kube-controller-manager-ha-406291
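A container status table like the one above can be regenerated directly on the node; a minimal sketch, assuming the ha-406291 profile is still running and that crictl inside the VM is pointed at the CRI-O socket (profile name and container ID are taken from this run):

	out/minikube-linux-amd64 -p ha-406291 ssh "sudo crictl ps -a"
	out/minikube-linux-amd64 -p ha-406291 ssh "sudo crictl inspect 252cb2f279857"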
	
	
	==> coredns [6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57758 - 16030 "HINFO IN 938012208132191314.8379741084222464033. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014128651s
	[INFO] 10.244.0.4:60864 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000870211s
	[INFO] 10.244.0.4:49527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014553s
	[INFO] 10.244.0.4:59987 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181145s
	[INFO] 10.244.0.4:59378 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009664502s
	[INFO] 10.244.0.4:59188 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181625s
	[INFO] 10.244.0.4:33100 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137671s
	[INFO] 10.244.0.4:43551 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129631s
	[INFO] 10.244.0.4:59759 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152418s
	[INFO] 10.244.0.4:60292 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090372s
	[INFO] 10.244.0.4:47967 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093215s
	[INFO] 10.244.0.4:44642 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175452s
	[INFO] 10.244.0.4:49677 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000070108s
	
	
	==> coredns [6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45911 - 30730 "HINFO IN 2397840142540691982.2649863782968500509. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014966559s
	[INFO] 10.244.0.4:38404 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.013105268s
	[INFO] 10.244.0.4:49299 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.225770527s
	[INFO] 10.244.0.4:41342 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.010990835s
	[INFO] 10.244.0.4:55838 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003903098s
	[INFO] 10.244.0.4:59078 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163236s
	[INFO] 10.244.0.4:39541 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147137s
	[INFO] 10.244.0.4:47420 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000120366s
	[INFO] 10.244.0.4:54009 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000255172s
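The same CoreDNS output can also be pulled through the API server rather than from the captured log; a minimal sketch, assuming the kubeconfig for this profile is the active context (pod names are taken from this run):

	kubectl -n kube-system logs coredns-7db6d8ff4d-7ng4v
	kubectl -n kube-system logs coredns-7db6d8ff4d-nx5xs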
	
	
	==> describe nodes <==
	Name:               ha-406291
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=ha-406291
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_21T18_27_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 18:27:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406291
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 18:41:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    ha-406291
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 10b5f2f4e64d426eb3a71e7a23c0cea5
	  System UUID:                10b5f2f4-e64d-426e-b3a7-1e7a23c0cea5
	  Boot ID:                    10778ad9-ed13-4749-a084-25b2b2bfde76
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qvl48              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-7ng4v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-nx5xs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-406291                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-vnds7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-406291             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-406291    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-xnbqj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-406291             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-406291                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node ha-406291 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node ha-406291 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node ha-406291 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m   node-controller  Node ha-406291 event: Registered Node ha-406291 in Controller
	  Normal  NodeReady                13m   kubelet          Node ha-406291 status is now: NodeReady
	
	
	Name:               ha-406291-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406291-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=ha-406291
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_21T18_41_02_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 18:41:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406291-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 18:41:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 18:41:10 +0000   Fri, 21 Jun 2024 18:41:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 18:41:10 +0000   Fri, 21 Jun 2024 18:41:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 18:41:10 +0000   Fri, 21 Jun 2024 18:41:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 18:41:10 +0000   Fri, 21 Jun 2024 18:41:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.193
	  Hostname:    ha-406291-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7aeb6d6b65b246d89e229cf308cb4c9a
	  System UUID:                7aeb6d6b-65b2-46d8-9e22-9cf308cb4c9a
	  Boot ID:                    077bb108-4737-40c3-9892-3695b5a49d4a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-drm4v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-xrm6w              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15s
	  kube-system                 kube-proxy-vknv4           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10s                kube-proxy       
	  Normal  NodeHasSufficientMemory  15s (x2 over 15s)  kubelet          Node ha-406291-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15s (x2 over 15s)  kubelet          Node ha-406291-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15s (x2 over 15s)  kubelet          Node ha-406291-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14s                node-controller  Node ha-406291-m03 event: Registered Node ha-406291-m03 in Controller
	  Normal  NodeReady                6s                 kubelet          Node ha-406291-m03 status is now: NodeReady
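The node conditions and events shown above come from the describe output; a minimal sketch of re-checking both the control-plane node and the newly joined worker for this profile (node names are taken from this run, and the profile's kubeconfig is assumed to be the active context):

	kubectl get nodes -o wide
	kubectl describe node ha-406291
	kubectl describe node ha-406291-m03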
	
	
	==> dmesg <==
	[Jun21 18:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051748] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037330] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.458081] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.725935] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +4.855560] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun21 18:27] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.057394] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056681] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.167604] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.147792] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.253886] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +3.905184] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.549385] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.060073] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.066237] systemd-fstab-generator[1360]: Ignoring "noauto" option for root device
	[  +0.078680] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.552032] kauditd_printk_skb: 21 callbacks suppressed
	[Jun21 18:28] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8] <==
	{"level":"info","ts":"2024-06-21T18:27:18.93929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-21T18:27:18.93932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgPreVoteResp from f1d2ab5330a2a0e3 at term 1"}
	{"level":"info","ts":"2024-06-21T18:27:18.939332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became candidate at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.939339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgVoteResp from f1d2ab5330a2a0e3 at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.939349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became leader at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.93936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f1d2ab5330a2a0e3 elected leader f1d2ab5330a2a0e3 at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.949394Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.951989Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f1d2ab5330a2a0e3","local-member-attributes":"{Name:ha-406291 ClientURLs:[https://192.168.39.198:2379]}","request-path":"/0/members/f1d2ab5330a2a0e3/attributes","cluster-id":"9fb372ad12afeb1b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-21T18:27:18.952029Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:27:18.952218Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:27:18.966375Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9fb372ad12afeb1b","local-member-id":"f1d2ab5330a2a0e3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.966532Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.966591Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.968078Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.198:2379"}
	{"level":"info","ts":"2024-06-21T18:27:18.969834Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-21T18:27:18.973596Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-21T18:27:18.986355Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-21T18:27:37.357719Z","caller":"traceutil/trace.go:171","msg":"trace[571743030] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"105.067279ms","start":"2024-06-21T18:27:37.252598Z","end":"2024-06-21T18:27:37.357665Z","steps":["trace[571743030] 'process raft request'  (duration: 48.775466ms)","trace[571743030] 'compare'  (duration: 56.093787ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T18:28:12.689426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.176174ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11593268453381319053 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:496 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:369 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-21T18:28:12.689586Z","caller":"traceutil/trace.go:171","msg":"trace[939483523] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"172.541349ms","start":"2024-06-21T18:28:12.517021Z","end":"2024-06-21T18:28:12.689563Z","steps":["trace[939483523] 'process raft request'  (duration: 46.605278ms)","trace[939483523] 'compare'  (duration: 124.988397ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-21T18:37:19.55118Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2024-06-21T18:37:19.562898Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":969,"took":"11.353931ms","hash":518064132,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2441216,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-06-21T18:37:19.562955Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":518064132,"revision":969,"compact-revision":-1}
	{"level":"info","ts":"2024-06-21T18:41:01.46327Z","caller":"traceutil/trace.go:171","msg":"trace[373022302] transaction","detail":"{read_only:false; response_revision:1916; number_of_response:1; }","duration":"202.232692ms","start":"2024-06-21T18:41:01.260997Z","end":"2024-06-21T18:41:01.46323Z","steps":["trace[373022302] 'process raft request'  (duration: 201.291371ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-21T18:41:01.463374Z","caller":"traceutil/trace.go:171","msg":"trace[1787973675] transaction","detail":"{read_only:false; response_revision:1917; number_of_response:1; }","duration":"177.381269ms","start":"2024-06-21T18:41:01.285981Z","end":"2024-06-21T18:41:01.463362Z","steps":["trace[1787973675] 'process raft request'  (duration: 177.120594ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:41:16 up 14 min,  0 users,  load average: 0.44, 0.25, 0.14
	Linux ha-406291 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d] <==
	I0621 18:39:29.510970       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:29.511181       1 main.go:227] handling current node
	I0621 18:39:39.514989       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:39.515025       1 main.go:227] handling current node
	I0621 18:39:49.520764       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:49.520908       1 main.go:227] handling current node
	I0621 18:39:59.524302       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:59.524430       1 main.go:227] handling current node
	I0621 18:40:09.536871       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:09.536951       1 main.go:227] handling current node
	I0621 18:40:19.546045       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:19.546228       1 main.go:227] handling current node
	I0621 18:40:29.557033       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:29.557254       1 main.go:227] handling current node
	I0621 18:40:39.561036       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:39.561193       1 main.go:227] handling current node
	I0621 18:40:49.569235       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:49.569361       1 main.go:227] handling current node
	I0621 18:40:59.579375       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:59.579516       1 main.go:227] handling current node
	I0621 18:41:09.583520       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:41:09.583631       1 main.go:227] handling current node
	I0621 18:41:09.583661       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:41:09.583679       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:41:09.583931       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.193 Flags: [] Table: 0} 
	
	
	==> kube-apiserver [2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3] <==
	I0621 18:27:21.231033       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0621 18:27:21.231057       1 policy_source.go:224] refreshing policies
	E0621 18:27:21.244004       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0621 18:27:21.291900       1 controller.go:615] quota admission added evaluator for: namespaces
	I0621 18:27:21.301249       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0621 18:27:22.093764       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0621 18:27:22.100226       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0621 18:27:22.100345       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0621 18:27:22.679124       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0621 18:27:22.717908       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0621 18:27:22.803597       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0621 18:27:22.812663       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.198]
	I0621 18:27:22.813674       1 controller.go:615] quota admission added evaluator for: endpoints
	I0621 18:27:22.817676       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0621 18:27:23.142771       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0621 18:27:24.323202       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0621 18:27:24.338622       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0621 18:27:24.532806       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0621 18:27:36.921775       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0621 18:27:37.247444       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0621 18:40:26.217258       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52318: use of closed network connection
	E0621 18:40:26.646809       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52394: use of closed network connection
	E0621 18:40:27.039177       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52460: use of closed network connection
	E0621 18:40:29.475531       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52582: use of closed network connection
	E0621 18:40:29.631306       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52614: use of closed network connection
	
	
	==> kube-controller-manager [3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31] <==
	I0621 18:27:37.660938       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="161.085µs"
	I0621 18:27:39.328050       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="55.475µs"
	I0621 18:27:39.330983       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.725µs"
	I0621 18:27:39.352409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.246µs"
	I0621 18:27:39.366116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.163µs"
	I0621 18:27:40.575618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="65.679µs"
	I0621 18:27:40.612176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.937752ms"
	I0621 18:27:40.612598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.232µs"
	I0621 18:27:40.634931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.444693ms"
	I0621 18:27:40.635035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.847µs"
	I0621 18:27:41.885215       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0621 18:28:57.137627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.563277ms"
	I0621 18:28:57.164070       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.375749ms"
	I0621 18:28:57.164194       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.743µs"
	I0621 18:29:00.876863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.452577ms"
	I0621 18:29:00.877083       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.932µs"
	I0621 18:41:01.468373       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-406291-m03\" does not exist"
	I0621 18:41:01.505245       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-406291-m03" podCIDRs=["10.244.1.0/24"]
	I0621 18:41:02.015312       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-406291-m03"
	I0621 18:41:10.879504       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-406291-m03"
	I0621 18:41:10.905675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="137.95µs"
	I0621 18:41:10.905996       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.91µs"
	I0621 18:41:10.921286       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.939µs"
	I0621 18:41:14.431187       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.902838ms"
	I0621 18:41:14.431268       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.911µs"
	
	
	==> kube-proxy [e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547] <==
	I0621 18:27:38.126736       1 server_linux.go:69] "Using iptables proxy"
	I0621 18:27:38.143236       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.198"]
	I0621 18:27:38.177576       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0621 18:27:38.177626       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0621 18:27:38.177644       1 server_linux.go:165] "Using iptables Proxier"
	I0621 18:27:38.180797       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0621 18:27:38.181002       1 server.go:872] "Version info" version="v1.30.2"
	I0621 18:27:38.181026       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 18:27:38.182882       1 config.go:192] "Starting service config controller"
	I0621 18:27:38.183195       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0621 18:27:38.183262       1 config.go:101] "Starting endpoint slice config controller"
	I0621 18:27:38.183278       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0621 18:27:38.184787       1 config.go:319] "Starting node config controller"
	I0621 18:27:38.184819       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0621 18:27:38.283818       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0621 18:27:38.283839       1 shared_informer.go:320] Caches are synced for service config
	I0621 18:27:38.285303       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d] <==
	W0621 18:27:21.175406       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0621 18:27:21.176948       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0621 18:27:21.176960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0621 18:27:21.176992       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:21.177025       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177056       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0621 18:27:21.177088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0621 18:27:21.177120       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0621 18:27:21.177197       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:21.177204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0621 18:27:22.041765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.041824       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.144830       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:22.144881       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0621 18:27:22.217224       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.217266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.256407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:22.256450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0621 18:27:22.361486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0621 18:27:22.361536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0621 18:27:22.366073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0621 18:27:22.366190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0621 18:27:25.267361       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 21 18:36:24 ha-406291 kubelet[1367]: E0621 18:36:24.482853    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:36:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:36:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:36:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:36:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:37:24 ha-406291 kubelet[1367]: E0621 18:37:24.483671    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:37:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:37:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:37:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:37:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:38:24 ha-406291 kubelet[1367]: E0621 18:38:24.483473    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:38:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:38:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:38:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:38:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:39:24 ha-406291 kubelet[1367]: E0621 18:39:24.484210    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:39:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:39:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:39:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:39:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:40:24 ha-406291 kubelet[1367]: E0621 18:40:24.483552    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:40:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:40:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:40:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:40:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
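
The repeating kubelet errors at the end of the log above come from kubelet's periodic iptables "canary" check: the minikube guest kernel exposes no ip6tables `nat` table, so creating the KUBE-KUBELET-CANARY chain for IPv6 fails once a minute. Since kube-proxy is running single-stack IPv4 (see its log further up), the messages are noise rather than a functional failure. A quick way to confirm the missing module from the node itself, sketched as illustrative commands that are not part of the recorded run:

    $ out/minikube-linux-amd64 -p ha-406291 ssh    # shell into the primary control-plane VM
    $ lsmod | grep ip6table_nat                    # empty output = IPv6 NAT module not loaded
    $ sudo modprobe ip6table_nat                   # the "do you need to insmod?" hint from the error
    $ sudo ip6tables -t nat -L                     # should list the nat table chains once the module loads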
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-406291 -n ha-406291
helpers_test.go:261: (dbg) Run:  kubectl --context ha-406291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-p2c87
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-406291 describe pod busybox-fc5497c4f-p2c87
helpers_test.go:282: (dbg) kubectl --context ha-406291 describe pod busybox-fc5497c4f-p2c87:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-p2c87
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q8tzk (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-q8tzk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  113s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  7s                  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (2.12s)
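
The describe output above explains the Pending pod: the busybox replicas carry a pod anti-affinity rule, so a second replica cannot land on a node that already hosts one, and at this point the cluster offers only one, then two, schedulable nodes for three replicas. Illustrative commands for checking the rule and the node count (not part of the recorded run; the Deployment name busybox is inferred from the ReplicaSet busybox-fc5497c4f shown above):

    $ kubectl --context ha-406291 get deploy busybox -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'
    $ kubectl --context ha-406291 get nodes -o wide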

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (2.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-406291 status --output json -v=7 --alsologtostderr: exit status 2 (566.312856ms)

                                                
                                                
-- stdout --
	[{"Name":"ha-406291","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-406291-m02","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false},{"Name":"ha-406291-m03","Host":"Running","Kubelet":"Running","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

                                                
                                                
-- /stdout --
** stderr ** 
	I0621 18:41:17.594533   34705 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:41:17.594776   34705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:41:17.594785   34705 out.go:304] Setting ErrFile to fd 2...
	I0621 18:41:17.594790   34705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:41:17.594989   34705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:41:17.595151   34705 out.go:298] Setting JSON to true
	I0621 18:41:17.595174   34705 mustload.go:65] Loading cluster: ha-406291
	I0621 18:41:17.595223   34705 notify.go:220] Checking for updates...
	I0621 18:41:17.595644   34705 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:41:17.595671   34705 status.go:255] checking status of ha-406291 ...
	I0621 18:41:17.596189   34705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:17.596232   34705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:17.611068   34705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43165
	I0621 18:41:17.611875   34705 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:17.612404   34705 main.go:141] libmachine: Using API Version  1
	I0621 18:41:17.612459   34705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:17.612783   34705 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:17.612971   34705 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:41:17.614578   34705 status.go:330] ha-406291 host status = "Running" (err=<nil>)
	I0621 18:41:17.614597   34705 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:41:17.614920   34705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:17.614956   34705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:17.630989   34705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33717
	I0621 18:41:17.631402   34705 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:17.631810   34705 main.go:141] libmachine: Using API Version  1
	I0621 18:41:17.631833   34705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:17.632157   34705 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:17.632355   34705 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:41:17.635343   34705 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:41:17.635683   34705 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:41:17.635713   34705 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:41:17.635861   34705 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:41:17.636178   34705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:17.636237   34705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:17.650691   34705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41915
	I0621 18:41:17.651050   34705 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:17.651523   34705 main.go:141] libmachine: Using API Version  1
	I0621 18:41:17.651551   34705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:17.651866   34705 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:17.652063   34705 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:41:17.652273   34705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:41:17.652293   34705 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:41:17.654997   34705 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:41:17.655384   34705 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:41:17.655419   34705 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:41:17.655542   34705 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:41:17.655707   34705 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:41:17.655850   34705 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:41:17.655988   34705 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:41:17.733095   34705 ssh_runner.go:195] Run: systemctl --version
	I0621 18:41:17.739499   34705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:41:17.755049   34705 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:41:17.755078   34705 api_server.go:166] Checking apiserver status ...
	I0621 18:41:17.755113   34705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0621 18:41:17.769428   34705 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup
	W0621 18:41:17.779379   34705 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:41:17.779434   34705 ssh_runner.go:195] Run: ls
	I0621 18:41:17.783769   34705 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0621 18:41:17.788692   34705 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0621 18:41:17.788718   34705 status.go:422] ha-406291 apiserver status = Running (err=<nil>)
	I0621 18:41:17.788735   34705 status.go:257] ha-406291 status: &{Name:ha-406291 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:41:17.788761   34705 status.go:255] checking status of ha-406291-m02 ...
	I0621 18:41:17.789064   34705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:17.789098   34705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:17.803986   34705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34133
	I0621 18:41:17.804429   34705 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:17.804884   34705 main.go:141] libmachine: Using API Version  1
	I0621 18:41:17.804904   34705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:17.805251   34705 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:17.805453   34705 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:41:17.806923   34705 status.go:330] ha-406291-m02 host status = "Running" (err=<nil>)
	I0621 18:41:17.806941   34705 host.go:66] Checking if "ha-406291-m02" exists ...
	I0621 18:41:17.807258   34705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:17.807307   34705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:17.822776   34705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34711
	I0621 18:41:17.823236   34705 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:17.823668   34705 main.go:141] libmachine: Using API Version  1
	I0621 18:41:17.823689   34705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:17.824002   34705 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:17.824176   34705 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:41:17.826893   34705 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:17.827344   34705 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:41:17.827368   34705 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:17.827484   34705 host.go:66] Checking if "ha-406291-m02" exists ...
	I0621 18:41:17.827769   34705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:17.827803   34705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:17.842344   34705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35681
	I0621 18:41:17.842765   34705 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:17.843219   34705 main.go:141] libmachine: Using API Version  1
	I0621 18:41:17.843242   34705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:17.843508   34705 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:17.843728   34705 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:41:17.843919   34705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:41:17.843941   34705 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:41:17.846453   34705 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:17.846792   34705 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:41:17.846815   34705 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:17.846947   34705 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:41:17.847096   34705 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:41:17.847224   34705 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:41:17.847396   34705 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:41:17.928567   34705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:41:17.942976   34705 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:41:17.943004   34705 api_server.go:166] Checking apiserver status ...
	I0621 18:41:17.943035   34705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0621 18:41:17.955873   34705 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:41:17.955894   34705 status.go:422] ha-406291-m02 apiserver status = Stopped (err=<nil>)
	I0621 18:41:17.955905   34705 status.go:257] ha-406291-m02 status: &{Name:ha-406291-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:41:17.955921   34705 status.go:255] checking status of ha-406291-m03 ...
	I0621 18:41:17.956232   34705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:17.956274   34705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:17.971915   34705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45791
	I0621 18:41:17.972344   34705 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:17.972793   34705 main.go:141] libmachine: Using API Version  1
	I0621 18:41:17.972811   34705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:17.973121   34705 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:17.973293   34705 main.go:141] libmachine: (ha-406291-m03) Calling .GetState
	I0621 18:41:17.974834   34705 status.go:330] ha-406291-m03 host status = "Running" (err=<nil>)
	I0621 18:41:17.974848   34705 host.go:66] Checking if "ha-406291-m03" exists ...
	I0621 18:41:17.975123   34705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:17.975154   34705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:17.990560   34705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45183
	I0621 18:41:17.991013   34705 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:17.991480   34705 main.go:141] libmachine: Using API Version  1
	I0621 18:41:17.991505   34705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:17.991766   34705 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:17.991946   34705 main.go:141] libmachine: (ha-406291-m03) Calling .GetIP
	I0621 18:41:17.994419   34705 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:41:17.994781   34705 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:41:17.994810   34705 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:41:17.994905   34705 host.go:66] Checking if "ha-406291-m03" exists ...
	I0621 18:41:17.995203   34705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:17.995257   34705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:18.010425   34705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43977
	I0621 18:41:18.010807   34705 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:18.011270   34705 main.go:141] libmachine: Using API Version  1
	I0621 18:41:18.011288   34705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:18.011596   34705 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:18.012271   34705 main.go:141] libmachine: (ha-406291-m03) Calling .DriverName
	I0621 18:41:18.012452   34705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:41:18.012471   34705 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHHostname
	I0621 18:41:18.015284   34705 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:41:18.015635   34705 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:41:18.015665   34705 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:41:18.015867   34705 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHPort
	I0621 18:41:18.016017   34705 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHKeyPath
	I0621 18:41:18.016135   34705 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHUsername
	I0621 18:41:18.016277   34705 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m03/id_rsa Username:docker}
	I0621 18:41:18.100730   34705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:41:18.115873   34705 status.go:257] ha-406291-m03 status: &{Name:ha-406291-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:328: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-406291 status --output json -v=7 --alsologtostderr" : exit status 2
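The status JSON above is why the command exits non-zero: ha-406291-m02 still has a Running host but reports both Kubelet and APIServer as Stopped, so ha_test aborts at the status check before attempting any file copies. A one-liner to pull the degraded node out of that JSON, shown only as an illustration (it assumes jq is available on the test host):

    $ out/minikube-linux-amd64 -p ha-406291 status --output json | jq -r '.[] | select(.Kubelet == "Stopped" or .APIServer == "Stopped") | .Name'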
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-406291 -n ha-406291
helpers_test.go:244: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-406291 logs -n 25: (1.066035275s)
helpers_test.go:252: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |           |         |         |                     |                     |
	| node    | add -p ha-406291 -v=7                | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:41 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/21 18:26:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0621 18:26:42.447747   30068 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:26:42.447858   30068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:26:42.447867   30068 out.go:304] Setting ErrFile to fd 2...
	I0621 18:26:42.447871   30068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:26:42.448064   30068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:26:42.448611   30068 out.go:298] Setting JSON to false
	I0621 18:26:42.449397   30068 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4100,"bootTime":1718990302,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 18:26:42.449454   30068 start.go:139] virtualization: kvm guest
	I0621 18:26:42.451750   30068 out.go:177] * [ha-406291] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 18:26:42.453097   30068 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 18:26:42.453116   30068 notify.go:220] Checking for updates...
	I0621 18:26:42.456195   30068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 18:26:42.457398   30068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:26:42.458579   30068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.459798   30068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 18:26:42.461088   30068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 18:26:42.462525   30068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 18:26:42.497263   30068 out.go:177] * Using the kvm2 driver based on user configuration
	I0621 18:26:42.498734   30068 start.go:297] selected driver: kvm2
	I0621 18:26:42.498753   30068 start.go:901] validating driver "kvm2" against <nil>
	I0621 18:26:42.498763   30068 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 18:26:42.499421   30068 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:26:42.499483   30068 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 18:26:42.513772   30068 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 18:26:42.513840   30068 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0621 18:26:42.514036   30068 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0621 18:26:42.514063   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:26:42.514070   30068 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0621 18:26:42.514080   30068 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0621 18:26:42.514119   30068 start.go:340] cluster config:
	{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:26:42.514203   30068 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:26:42.515839   30068 out.go:177] * Starting "ha-406291" primary control-plane node in "ha-406291" cluster
	I0621 18:26:42.516925   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:26:42.516952   30068 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 18:26:42.516960   30068 cache.go:56] Caching tarball of preloaded images
	I0621 18:26:42.517025   30068 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:26:42.517035   30068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:26:42.517302   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:26:42.517325   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json: {Name:mkd43eceea282503c79b6e4b90bbf7258fcf8b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:26:42.517445   30068 start.go:360] acquireMachinesLock for ha-406291: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:26:42.517470   30068 start.go:364] duration metric: took 13.314µs to acquireMachinesLock for "ha-406291"
	I0621 18:26:42.517485   30068 start.go:93] Provisioning new machine with config: &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:26:42.517531   30068 start.go:125] createHost starting for "" (driver="kvm2")
	I0621 18:26:42.518937   30068 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0621 18:26:42.519071   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:26:42.519109   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:26:42.533235   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0621 18:26:42.533669   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:26:42.534312   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:26:42.534360   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:26:42.534665   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:26:42.534880   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:26:42.535018   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:26:42.535180   30068 start.go:159] libmachine.API.Create for "ha-406291" (driver="kvm2")
	I0621 18:26:42.535209   30068 client.go:168] LocalClient.Create starting
	I0621 18:26:42.535233   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem
	I0621 18:26:42.535267   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:26:42.535282   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:26:42.535339   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem
	I0621 18:26:42.535357   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:26:42.535367   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:26:42.535383   30068 main.go:141] libmachine: Running pre-create checks...
	I0621 18:26:42.535396   30068 main.go:141] libmachine: (ha-406291) Calling .PreCreateCheck
	I0621 18:26:42.535734   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:26:42.536101   30068 main.go:141] libmachine: Creating machine...
	I0621 18:26:42.536113   30068 main.go:141] libmachine: (ha-406291) Calling .Create
	I0621 18:26:42.536232   30068 main.go:141] libmachine: (ha-406291) Creating KVM machine...
	I0621 18:26:42.537484   30068 main.go:141] libmachine: (ha-406291) DBG | found existing default KVM network
	I0621 18:26:42.538310   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.538153   30091 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0621 18:26:42.538339   30068 main.go:141] libmachine: (ha-406291) DBG | created network xml: 
	I0621 18:26:42.538346   30068 main.go:141] libmachine: (ha-406291) DBG | <network>
	I0621 18:26:42.538355   30068 main.go:141] libmachine: (ha-406291) DBG |   <name>mk-ha-406291</name>
	I0621 18:26:42.538371   30068 main.go:141] libmachine: (ha-406291) DBG |   <dns enable='no'/>
	I0621 18:26:42.538385   30068 main.go:141] libmachine: (ha-406291) DBG |   
	I0621 18:26:42.538392   30068 main.go:141] libmachine: (ha-406291) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0621 18:26:42.538400   30068 main.go:141] libmachine: (ha-406291) DBG |     <dhcp>
	I0621 18:26:42.538412   30068 main.go:141] libmachine: (ha-406291) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0621 18:26:42.538421   30068 main.go:141] libmachine: (ha-406291) DBG |     </dhcp>
	I0621 18:26:42.538439   30068 main.go:141] libmachine: (ha-406291) DBG |   </ip>
	I0621 18:26:42.538451   30068 main.go:141] libmachine: (ha-406291) DBG |   
	I0621 18:26:42.538458   30068 main.go:141] libmachine: (ha-406291) DBG | </network>
	I0621 18:26:42.538470   30068 main.go:141] libmachine: (ha-406291) DBG | 
	I0621 18:26:42.543401   30068 main.go:141] libmachine: (ha-406291) DBG | trying to create private KVM network mk-ha-406291 192.168.39.0/24...
	I0621 18:26:42.606041   30068 main.go:141] libmachine: (ha-406291) DBG | private KVM network mk-ha-406291 192.168.39.0/24 created
	I0621 18:26:42.606072   30068 main.go:141] libmachine: (ha-406291) Setting up store path in /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 ...
	I0621 18:26:42.606091   30068 main.go:141] libmachine: (ha-406291) Building disk image from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 18:26:42.606165   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.606075   30091 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.606280   30068 main.go:141] libmachine: (ha-406291) Downloading /home/jenkins/minikube-integration/19112-8111/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0621 18:26:42.829374   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.829262   30091 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa...
	I0621 18:26:42.941790   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.941666   30091 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/ha-406291.rawdisk...
	I0621 18:26:42.941834   30068 main.go:141] libmachine: (ha-406291) DBG | Writing magic tar header
	I0621 18:26:42.941844   30068 main.go:141] libmachine: (ha-406291) DBG | Writing SSH key tar header
	I0621 18:26:42.941852   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.941778   30091 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 ...
	I0621 18:26:42.941909   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291
	I0621 18:26:42.941989   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 (perms=drwx------)
	I0621 18:26:42.942007   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines
	I0621 18:26:42.942019   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines (perms=drwxr-xr-x)
	I0621 18:26:42.942033   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube (perms=drwxr-xr-x)
	I0621 18:26:42.942053   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111 (perms=drwxrwxr-x)
	I0621 18:26:42.942060   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.942069   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111
	I0621 18:26:42.942075   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0621 18:26:42.942080   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0621 18:26:42.942088   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins
	I0621 18:26:42.942104   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0621 18:26:42.942117   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home
	I0621 18:26:42.942128   30068 main.go:141] libmachine: (ha-406291) DBG | Skipping /home - not owner
	I0621 18:26:42.942142   30068 main.go:141] libmachine: (ha-406291) Creating domain...
	I0621 18:26:42.943154   30068 main.go:141] libmachine: (ha-406291) define libvirt domain using xml: 
	I0621 18:26:42.943176   30068 main.go:141] libmachine: (ha-406291) <domain type='kvm'>
	I0621 18:26:42.943183   30068 main.go:141] libmachine: (ha-406291)   <name>ha-406291</name>
	I0621 18:26:42.943188   30068 main.go:141] libmachine: (ha-406291)   <memory unit='MiB'>2200</memory>
	I0621 18:26:42.943199   30068 main.go:141] libmachine: (ha-406291)   <vcpu>2</vcpu>
	I0621 18:26:42.943203   30068 main.go:141] libmachine: (ha-406291)   <features>
	I0621 18:26:42.943208   30068 main.go:141] libmachine: (ha-406291)     <acpi/>
	I0621 18:26:42.943212   30068 main.go:141] libmachine: (ha-406291)     <apic/>
	I0621 18:26:42.943217   30068 main.go:141] libmachine: (ha-406291)     <pae/>
	I0621 18:26:42.943223   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943229   30068 main.go:141] libmachine: (ha-406291)   </features>
	I0621 18:26:42.943234   30068 main.go:141] libmachine: (ha-406291)   <cpu mode='host-passthrough'>
	I0621 18:26:42.943255   30068 main.go:141] libmachine: (ha-406291)   
	I0621 18:26:42.943266   30068 main.go:141] libmachine: (ha-406291)   </cpu>
	I0621 18:26:42.943284   30068 main.go:141] libmachine: (ha-406291)   <os>
	I0621 18:26:42.943318   30068 main.go:141] libmachine: (ha-406291)     <type>hvm</type>
	I0621 18:26:42.943328   30068 main.go:141] libmachine: (ha-406291)     <boot dev='cdrom'/>
	I0621 18:26:42.943333   30068 main.go:141] libmachine: (ha-406291)     <boot dev='hd'/>
	I0621 18:26:42.943341   30068 main.go:141] libmachine: (ha-406291)     <bootmenu enable='no'/>
	I0621 18:26:42.943345   30068 main.go:141] libmachine: (ha-406291)   </os>
	I0621 18:26:42.943355   30068 main.go:141] libmachine: (ha-406291)   <devices>
	I0621 18:26:42.943360   30068 main.go:141] libmachine: (ha-406291)     <disk type='file' device='cdrom'>
	I0621 18:26:42.943371   30068 main.go:141] libmachine: (ha-406291)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/boot2docker.iso'/>
	I0621 18:26:42.943384   30068 main.go:141] libmachine: (ha-406291)       <target dev='hdc' bus='scsi'/>
	I0621 18:26:42.943397   30068 main.go:141] libmachine: (ha-406291)       <readonly/>
	I0621 18:26:42.943404   30068 main.go:141] libmachine: (ha-406291)     </disk>
	I0621 18:26:42.943417   30068 main.go:141] libmachine: (ha-406291)     <disk type='file' device='disk'>
	I0621 18:26:42.943429   30068 main.go:141] libmachine: (ha-406291)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0621 18:26:42.943445   30068 main.go:141] libmachine: (ha-406291)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/ha-406291.rawdisk'/>
	I0621 18:26:42.943456   30068 main.go:141] libmachine: (ha-406291)       <target dev='hda' bus='virtio'/>
	I0621 18:26:42.943478   30068 main.go:141] libmachine: (ha-406291)     </disk>
	I0621 18:26:42.943499   30068 main.go:141] libmachine: (ha-406291)     <interface type='network'>
	I0621 18:26:42.943509   30068 main.go:141] libmachine: (ha-406291)       <source network='mk-ha-406291'/>
	I0621 18:26:42.943513   30068 main.go:141] libmachine: (ha-406291)       <model type='virtio'/>
	I0621 18:26:42.943519   30068 main.go:141] libmachine: (ha-406291)     </interface>
	I0621 18:26:42.943526   30068 main.go:141] libmachine: (ha-406291)     <interface type='network'>
	I0621 18:26:42.943532   30068 main.go:141] libmachine: (ha-406291)       <source network='default'/>
	I0621 18:26:42.943539   30068 main.go:141] libmachine: (ha-406291)       <model type='virtio'/>
	I0621 18:26:42.943544   30068 main.go:141] libmachine: (ha-406291)     </interface>
	I0621 18:26:42.943549   30068 main.go:141] libmachine: (ha-406291)     <serial type='pty'>
	I0621 18:26:42.943554   30068 main.go:141] libmachine: (ha-406291)       <target port='0'/>
	I0621 18:26:42.943560   30068 main.go:141] libmachine: (ha-406291)     </serial>
	I0621 18:26:42.943565   30068 main.go:141] libmachine: (ha-406291)     <console type='pty'>
	I0621 18:26:42.943571   30068 main.go:141] libmachine: (ha-406291)       <target type='serial' port='0'/>
	I0621 18:26:42.943583   30068 main.go:141] libmachine: (ha-406291)     </console>
	I0621 18:26:42.943593   30068 main.go:141] libmachine: (ha-406291)     <rng model='virtio'>
	I0621 18:26:42.943602   30068 main.go:141] libmachine: (ha-406291)       <backend model='random'>/dev/random</backend>
	I0621 18:26:42.943609   30068 main.go:141] libmachine: (ha-406291)     </rng>
	I0621 18:26:42.943617   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943621   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943627   30068 main.go:141] libmachine: (ha-406291)   </devices>
	I0621 18:26:42.943631   30068 main.go:141] libmachine: (ha-406291) </domain>
	I0621 18:26:42.943638   30068 main.go:141] libmachine: (ha-406291) 
	I0621 18:26:42.948298   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:44:10:c4 in network default
	I0621 18:26:42.948968   30068 main.go:141] libmachine: (ha-406291) Ensuring networks are active...
	I0621 18:26:42.948988   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:42.949710   30068 main.go:141] libmachine: (ha-406291) Ensuring network default is active
	I0621 18:26:42.950033   30068 main.go:141] libmachine: (ha-406291) Ensuring network mk-ha-406291 is active
	I0621 18:26:42.950493   30068 main.go:141] libmachine: (ha-406291) Getting domain xml...
	I0621 18:26:42.951151   30068 main.go:141] libmachine: (ha-406291) Creating domain...
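Defining and booting the guest follows the same pattern with the domain XML dumped above. A hedged sketch, reusing the libvirt.org/go/libvirt binding and the connection assumed in the previous sketch rather than the driver's real implementation:

```go
// defineAndStart persists a domain definition (like the <domain> document logged
// above) and then boots it, mirroring "virsh define" followed by "virsh start".
// Sketch only; error wrapping and rollback are omitted.
func defineAndStart(conn *libvirt.Connect, domainXML string) error {
	dom, err := conn.DomainDefineXML(domainXML) // virDomainDefineXML
	if err != nil {
		return fmt.Errorf("define domain: %w", err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil { // virDomainCreate: start the defined domain
		return fmt.Errorf("start domain: %w", err)
	}
	return nil
}
```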
	I0621 18:26:44.128421   30068 main.go:141] libmachine: (ha-406291) Waiting to get IP...
	I0621 18:26:44.129183   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.129530   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.129550   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.129513   30091 retry.go:31] will retry after 273.280189ms: waiting for machine to come up
	I0621 18:26:44.404590   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.405440   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.405467   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.405386   30091 retry.go:31] will retry after 363.287979ms: waiting for machine to come up
	I0621 18:26:44.769749   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.770188   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.770217   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.770146   30091 retry.go:31] will retry after 445.9009ms: waiting for machine to come up
	I0621 18:26:45.217708   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:45.218113   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:45.218132   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:45.218075   30091 retry.go:31] will retry after 497.769852ms: waiting for machine to come up
	I0621 18:26:45.717913   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:45.718380   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:45.718402   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:45.718333   30091 retry.go:31] will retry after 609.412902ms: waiting for machine to come up
	I0621 18:26:46.329589   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:46.330043   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:46.330077   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:46.330033   30091 retry.go:31] will retry after 668.226784ms: waiting for machine to come up
	I0621 18:26:46.999851   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:47.000352   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:47.000399   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:47.000310   30091 retry.go:31] will retry after 928.90777ms: waiting for machine to come up
	I0621 18:26:47.931043   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:47.931568   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:47.931598   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:47.931527   30091 retry.go:31] will retry after 1.407643188s: waiting for machine to come up
	I0621 18:26:49.341126   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:49.341529   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:49.341557   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:49.341489   30091 retry.go:31] will retry after 1.657120945s: waiting for machine to come up
	I0621 18:26:51.001518   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:51.001999   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:51.002022   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:51.001955   30091 retry.go:31] will retry after 1.506025988s: waiting for machine to come up
	I0621 18:26:52.509823   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:52.510314   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:52.510342   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:52.510269   30091 retry.go:31] will retry after 2.859818514s: waiting for machine to come up
	I0621 18:26:55.371181   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:55.371726   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:55.371755   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:55.371678   30091 retry.go:31] will retry after 3.374080501s: waiting for machine to come up
	I0621 18:26:58.747494   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:58.748019   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:58.748039   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:58.747991   30091 retry.go:31] will retry after 4.386740875s: waiting for machine to come up
	I0621 18:27:03.136546   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.137046   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has current primary IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.137063   30068 main.go:141] libmachine: (ha-406291) Found IP for machine: 192.168.39.198
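The repeated "will retry after …" lines above come from a polling loop that backs off until the new guest picks up a DHCP lease. A minimal sketch of that wait, where lookupIP is a hypothetical stand-in for reading the libvirt network's DHCP leases for the guest's MAC address:

```go
// waitForIP polls lookupIP with growing delays until the guest reports an
// address or the deadline passes. The growth factor is only meant to resemble
// the delays seen in the log, not to reproduce the driver's exact backoff.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil && ip != "" {
			return ip, nil
		}
		log.Printf("will retry after %s: waiting for machine to come up", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay += delay / 2 // grow the delay, roughly like the intervals above
		}
	}
	return "", fmt.Errorf("timed out after %s waiting for an IP address", timeout)
}
```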
	I0621 18:27:03.137079   30068 main.go:141] libmachine: (ha-406291) Reserving static IP address...
	I0621 18:27:03.137427   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find host DHCP lease matching {name: "ha-406291", mac: "52:54:00:38:dc:46", ip: "192.168.39.198"} in network mk-ha-406291
	I0621 18:27:03.211473   30068 main.go:141] libmachine: (ha-406291) DBG | Getting to WaitForSSH function...
	I0621 18:27:03.211506   30068 main.go:141] libmachine: (ha-406291) Reserved static IP address: 192.168.39.198
	I0621 18:27:03.211519   30068 main.go:141] libmachine: (ha-406291) Waiting for SSH to be available...
	I0621 18:27:03.214029   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.214477   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291
	I0621 18:27:03.214509   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find defined IP address of network mk-ha-406291 interface with MAC address 52:54:00:38:dc:46
	I0621 18:27:03.214661   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH client type: external
	I0621 18:27:03.214702   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa (-rw-------)
	I0621 18:27:03.214745   30068 main.go:141] libmachine: (ha-406291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:03.214771   30068 main.go:141] libmachine: (ha-406291) DBG | About to run SSH command:
	I0621 18:27:03.214784   30068 main.go:141] libmachine: (ha-406291) DBG | exit 0
	I0621 18:27:03.218578   30068 main.go:141] libmachine: (ha-406291) DBG | SSH cmd err, output: exit status 255: 
	I0621 18:27:03.218603   30068 main.go:141] libmachine: (ha-406291) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0621 18:27:03.218614   30068 main.go:141] libmachine: (ha-406291) DBG | command : exit 0
	I0621 18:27:03.218630   30068 main.go:141] libmachine: (ha-406291) DBG | err     : exit status 255
	I0621 18:27:03.218643   30068 main.go:141] libmachine: (ha-406291) DBG | output  : 
	I0621 18:27:06.220803   30068 main.go:141] libmachine: (ha-406291) DBG | Getting to WaitForSSH function...
	I0621 18:27:06.223287   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.223552   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.223591   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.223725   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH client type: external
	I0621 18:27:06.223751   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa (-rw-------)
	I0621 18:27:06.223775   30068 main.go:141] libmachine: (ha-406291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:06.223788   30068 main.go:141] libmachine: (ha-406291) DBG | About to run SSH command:
	I0621 18:27:06.223797   30068 main.go:141] libmachine: (ha-406291) DBG | exit 0
	I0621 18:27:06.345962   30068 main.go:141] libmachine: (ha-406291) DBG | SSH cmd err, output: <nil>: 
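Until the probe above succeeds (first attempt: exit status 255, second: nil), the driver keeps re-running `exit 0` through an external ssh client with the options shown in the log. A rough equivalent using os/exec; the helper name is made up and only a subset of the logged flags is repeated:

```go
// sshReady runs "exit 0" on the guest with a non-interactive ssh invocation and
// returns nil once sshd accepts the connection. Sketch only.
func sshReady(ip, keyPath string) error {
	cmd := exec.Command("/usr/bin/ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@"+ip,
		"exit 0")
	return cmd.Run() // non-nil (e.g. exit status 255) while sshd is not up yet
}
```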
	I0621 18:27:06.346198   30068 main.go:141] libmachine: (ha-406291) KVM machine creation complete!
	I0621 18:27:06.346530   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:27:06.347151   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:06.347376   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:06.347539   30068 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0621 18:27:06.347553   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:06.349257   30068 main.go:141] libmachine: Detecting operating system of created instance...
	I0621 18:27:06.349272   30068 main.go:141] libmachine: Waiting for SSH to be available...
	I0621 18:27:06.349278   30068 main.go:141] libmachine: Getting to WaitForSSH function...
	I0621 18:27:06.349284   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.351365   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.351709   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.351738   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.351848   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.352053   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.352215   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.352441   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.352676   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.352926   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.352939   30068 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0621 18:27:06.449038   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:06.449066   30068 main.go:141] libmachine: Detecting the provisioner...
	I0621 18:27:06.449077   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.451811   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.452202   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.452223   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.452405   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.452602   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.452762   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.452898   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.453074   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.453321   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.453334   30068 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0621 18:27:06.550539   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0621 18:27:06.550611   30068 main.go:141] libmachine: found compatible host: buildroot
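Provisioner detection boils down to reading the ID field from the `/etc/os-release` output captured above. A small illustrative parser (not the libmachine detector itself):

```go
// osReleaseID extracts the ID= value from the contents of /etc/os-release,
// e.g. "buildroot" from the output shown above.
func osReleaseID(osRelease string) string {
	for _, line := range strings.Split(osRelease, "\n") {
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
		}
	}
	return ""
}
```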
	I0621 18:27:06.550618   30068 main.go:141] libmachine: Provisioning with buildroot...
	I0621 18:27:06.550625   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.550871   30068 buildroot.go:166] provisioning hostname "ha-406291"
	I0621 18:27:06.550891   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.551068   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.553701   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.554112   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.554138   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.554279   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.554452   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.554601   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.554725   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.554869   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.555029   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.555040   30068 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291 && echo "ha-406291" | sudo tee /etc/hostname
	I0621 18:27:06.664012   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291
	
	I0621 18:27:06.664038   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.666600   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.666923   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.666952   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.667091   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.667277   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.667431   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.667559   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.667745   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.667932   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.667949   30068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:27:06.778156   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:06.778199   30068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:27:06.778224   30068 buildroot.go:174] setting up certificates
	I0621 18:27:06.778237   30068 provision.go:84] configureAuth start
	I0621 18:27:06.778250   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.778526   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:06.781267   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.781583   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.781610   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.781773   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.784225   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.784546   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.784564   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.784717   30068 provision.go:143] copyHostCerts
	I0621 18:27:06.784747   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:06.784796   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:27:06.784813   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:06.784893   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:27:06.784992   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:06.785017   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:27:06.785023   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:06.785064   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:27:06.785126   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:06.785153   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:27:06.785162   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:06.785194   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:27:06.785257   30068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291 san=[127.0.0.1 192.168.39.198 ha-406291 localhost minikube]
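The server certificate is issued locally with the SANs listed in the log line above and signed by the minikube CA. A compressed sketch of that kind of issuance using crypto/x509; unlike the real flow, which reuses ca.pem and ca-key.pem from the .minikube store, this example generates a throwaway CA so it runs on its own:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA; the real flow loads the existing ca.pem / ca-key.pem instead.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate carrying the SANs from the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-406291"}},
		DNSNames:     []string{"ha-406291", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.198")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration seen in the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("issued server cert (%d DER bytes) signed by the CA", len(srvDER))
}
```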
	I0621 18:27:06.904910   30068 provision.go:177] copyRemoteCerts
	I0621 18:27:06.904976   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:27:06.905004   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.907600   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.907883   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.907916   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.908115   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.908308   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.908462   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.908599   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:06.987463   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:27:06.987540   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:27:07.009572   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:27:07.009661   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0621 18:27:07.031219   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:27:07.031333   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 18:27:07.052682   30068 provision.go:87] duration metric: took 274.433059ms to configureAuth
	I0621 18:27:07.052709   30068 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:27:07.052895   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:07.052984   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.055368   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.055720   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.055742   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.055971   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.056161   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.056324   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.056453   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.056615   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:07.056785   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:07.056814   30068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:27:07.307055   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 18:27:07.307083   30068 main.go:141] libmachine: Checking connection to Docker...
	I0621 18:27:07.307105   30068 main.go:141] libmachine: (ha-406291) Calling .GetURL
	I0621 18:27:07.308373   30068 main.go:141] libmachine: (ha-406291) DBG | Using libvirt version 6000000
	I0621 18:27:07.310322   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.310631   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.310658   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.310756   30068 main.go:141] libmachine: Docker is up and running!
	I0621 18:27:07.310768   30068 main.go:141] libmachine: Reticulating splines...
	I0621 18:27:07.310774   30068 client.go:171] duration metric: took 24.775558818s to LocalClient.Create
	I0621 18:27:07.310795   30068 start.go:167] duration metric: took 24.775614868s to libmachine.API.Create "ha-406291"
	I0621 18:27:07.310807   30068 start.go:293] postStartSetup for "ha-406291" (driver="kvm2")
	I0621 18:27:07.310818   30068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:27:07.310835   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.311186   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:27:07.311208   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.313308   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.313543   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.313581   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.313682   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.313855   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.314042   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.314209   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.391859   30068 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:27:07.396062   30068 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:27:07.396083   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:27:07.396132   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:27:07.396193   30068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:27:07.396202   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:27:07.396289   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:27:07.405435   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:07.427927   30068 start.go:296] duration metric: took 117.075834ms for postStartSetup
	I0621 18:27:07.427984   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:27:07.428562   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:07.431157   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.431479   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.431523   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.431791   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:07.431969   30068 start.go:128] duration metric: took 24.914429669s to createHost
	I0621 18:27:07.431990   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.434121   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.434421   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.434445   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.434510   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.434692   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.434865   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.435009   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.435168   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:07.435372   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:07.435384   30068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0621 18:27:07.530141   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718994427.508226463
	
	I0621 18:27:07.530165   30068 fix.go:216] guest clock: 1718994427.508226463
	I0621 18:27:07.530173   30068 fix.go:229] Guest: 2024-06-21 18:27:07.508226463 +0000 UTC Remote: 2024-06-21 18:27:07.431981059 +0000 UTC m=+25.016949864 (delta=76.245404ms)
	I0621 18:27:07.530199   30068 fix.go:200] guest clock delta is within tolerance: 76.245404ms
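The clock check above compares the guest's `date +%s.%N` output (1718994427.508226463) against the host time and accepts the drift if it is small. An illustrative helper; the tolerance is passed in by the caller and is an assumption here, not minikube's constant:

```go
// clockDelta parses the guest's "date +%s.%N" output and reports how far it is
// from the local clock. Float parsing loses sub-microsecond precision, which is
// fine for a tolerance check like the one logged above.
func clockDelta(guestOut string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guest), nil
}

// withinTolerance reports whether the absolute delta is inside the allowed bound.
func withinTolerance(d, tol time.Duration) bool {
	if d < 0 {
		d = -d
	}
	return d <= tol
}
```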
	I0621 18:27:07.530204   30068 start.go:83] releasing machines lock for "ha-406291", held for 25.012726918s
	I0621 18:27:07.530222   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.530466   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:07.532753   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.533110   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.533151   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.533275   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533702   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533877   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533978   30068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:27:07.534028   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.534087   30068 ssh_runner.go:195] Run: cat /version.json
	I0621 18:27:07.534115   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.536489   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536798   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.536828   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536845   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536983   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.537154   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.537312   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.537330   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.537337   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.537509   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.537507   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.537675   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.537830   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.537968   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.610886   30068 ssh_runner.go:195] Run: systemctl --version
	I0621 18:27:07.648150   30068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:27:07.798080   30068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:27:07.803683   30068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:27:07.803731   30068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:27:07.820345   30068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0621 18:27:07.820363   30068 start.go:494] detecting cgroup driver to use...
	I0621 18:27:07.820412   30068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:27:07.835960   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:27:07.849269   30068 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:27:07.849324   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:27:07.861858   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:27:07.874371   30068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:27:07.984965   30068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:27:08.126897   30068 docker.go:233] disabling docker service ...
	I0621 18:27:08.126973   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:27:08.140294   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:27:08.152460   30068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:27:08.289101   30068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:27:08.414578   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 18:27:08.428193   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:27:08.445335   30068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:27:08.445406   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.454715   30068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:27:08.454780   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.464286   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.473688   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.483215   30068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:27:08.492907   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.502386   30068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.518138   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
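The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. The same kind of edit done from Go with a regexp, shown for the cgroup_manager line only (path handling and the helper name are illustrative):

```go
// setCgroupManager mimics the sed call above: replace any existing
// cgroup_manager assignment in the CRI-O drop-in with "cgroupfs".
func setCgroupManager(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := re.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	return os.WriteFile(path, out, 0o644)
}
```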
	I0621 18:27:08.527822   30068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:27:08.536491   30068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 18:27:08.536537   30068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 18:27:08.548343   30068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 18:27:08.557395   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:27:08.668782   30068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 18:27:08.793146   30068 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:27:08.793228   30068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:27:08.797886   30068 start.go:562] Will wait 60s for crictl version
	I0621 18:27:08.797933   30068 ssh_runner.go:195] Run: which crictl
	I0621 18:27:08.801183   30068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:27:08.838953   30068 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:27:08.839028   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:27:08.865047   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:27:08.892059   30068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:27:08.893365   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:08.895801   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:08.896174   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:08.896198   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:08.896377   30068 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:27:08.900124   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:27:08.912152   30068 kubeadm.go:877] updating cluster {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0621 18:27:08.912252   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:27:08.912299   30068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:27:08.941267   30068 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0621 18:27:08.941328   30068 ssh_runner.go:195] Run: which lz4
	I0621 18:27:08.944757   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0621 18:27:08.944843   30068 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0621 18:27:08.948482   30068 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0621 18:27:08.948507   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0621 18:27:10.186487   30068 crio.go:462] duration metric: took 1.241671996s to copy over tarball
	I0621 18:27:10.186568   30068 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0621 18:27:12.219224   30068 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.032622286s)
	I0621 18:27:12.219256   30068 crio.go:469] duration metric: took 2.032747658s to extract the tarball
	I0621 18:27:12.219265   30068 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0621 18:27:12.255526   30068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:27:12.297692   30068 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 18:27:12.297715   30068 cache_images.go:84] Images are preloaded, skipping loading
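	(The preload handling above follows a simple pattern: if the images are not already in the CRI-O store, the cached tarball is copied into the VM, unpacked over /var, and removed. A minimal sketch of the guest-side steps, assuming the tarball has already been copied to /preloaded.tar.lz4 as in this run:

	    # unpack the preloaded image tarball into CRI-O's storage and confirm
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    sudo rm -f /preloaded.tar.lz4
	    sudo crictl images --output json   # preloaded images should now be listed
	)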
	I0621 18:27:12.297725   30068 kubeadm.go:928] updating node { 192.168.39.198 8443 v1.30.2 crio true true} ...
	I0621 18:27:12.297863   30068 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0621 18:27:12.297956   30068 ssh_runner.go:195] Run: crio config
	I0621 18:27:12.347243   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:27:12.347276   30068 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0621 18:27:12.347288   30068 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0621 18:27:12.347314   30068 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.198 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-406291 NodeName:ha-406291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0621 18:27:12.347487   30068 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-406291"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
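	(The generated config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and consumed by kubeadm init later in the log. A config like this can also be exercised by hand without modifying the node; a hedged sketch using the path from this run:

	    # dry-run the generated config (no changes are made to the host)
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	    # print kubeadm's defaults for comparison with the fields above
	    kubeadm config print init-defaults
	)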
	
	I0621 18:27:12.347514   30068 kube-vip.go:115] generating kube-vip config ...
	I0621 18:27:12.347563   30068 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:27:12.362180   30068 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:27:12.362273   30068 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
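	(The manifest above is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so the kubelet runs kube-vip as a static pod; once it wins the plndr-cp-lock lease it binds the HA virtual IP on eth0. Two quick checks, using the values from this run:

	    # the control-plane VIP from the manifest should appear on eth0
	    ip -4 addr show dev eth0 | grep 192.168.39.254
	    # static pod manifests live in the kubelet manifest directory
	    ls -l /etc/kubernetes/manifests/kube-vip.yaml
	)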
	I0621 18:27:12.362316   30068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:27:12.371448   30068 binaries.go:44] Found k8s binaries, skipping transfer
	I0621 18:27:12.371499   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0621 18:27:12.380031   30068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0621 18:27:12.395354   30068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0621 18:27:12.410533   30068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0621 18:27:12.425474   30068 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0621 18:27:12.440059   30068 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0621 18:27:12.443523   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:27:12.454828   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:27:12.572486   30068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0621 18:27:12.589057   30068 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.198
	I0621 18:27:12.589078   30068 certs.go:194] generating shared ca certs ...
	I0621 18:27:12.589095   30068 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.589221   30068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:27:12.589272   30068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:27:12.589282   30068 certs.go:256] generating profile certs ...
	I0621 18:27:12.589333   30068 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:27:12.589346   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt with IP's: []
	I0621 18:27:12.759863   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt ...
	I0621 18:27:12.759890   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt: {Name:mk1350197087e6f37ca28e80a43c199beace4f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.760090   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key ...
	I0621 18:27:12.760104   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key: {Name:mk90994b992a268304b337419707e3332d3f039a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.760206   30068 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92
	I0621 18:27:12.760222   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.254]
	I0621 18:27:13.132336   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 ...
	I0621 18:27:13.132362   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92: {Name:mke7daa70ff2d7bf8fa87eea51b1ed6731c0dd6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.132530   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92 ...
	I0621 18:27:13.132546   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92: {Name:mk310235904dba1c4db66ef73b8dcc06ff030051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.132647   30068 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:27:13.132737   30068 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
	I0621 18:27:13.132790   30068 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:27:13.132806   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt with IP's: []
	I0621 18:27:13.317891   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt ...
	I0621 18:27:13.317927   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt: {Name:mk5e450ef3633fa54e81eaeb94f9408c94729912 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.318119   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key ...
	I0621 18:27:13.318132   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key: {Name:mk3a1443924b05c36251566d5313d0eeb467e0fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.318220   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:27:13.318241   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:27:13.318251   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:27:13.318264   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:27:13.318274   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:27:13.318290   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:27:13.318302   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:27:13.318314   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:27:13.318363   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:27:13.318396   30068 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:27:13.318406   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:27:13.318428   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:27:13.318449   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:27:13.318469   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:27:13.318506   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:13.318531   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.318544   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.318556   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.319121   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:27:13.345382   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:27:13.379289   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:27:13.406853   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:27:13.430624   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0621 18:27:13.452498   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0621 18:27:13.474381   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:27:13.497475   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:27:13.520548   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:27:13.543849   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:27:13.569722   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:27:13.594191   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0621 18:27:13.611312   30068 ssh_runner.go:195] Run: openssl version
	I0621 18:27:13.616881   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:27:13.627054   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.631162   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.631214   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.636845   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:27:13.648132   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:27:13.658846   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.663074   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.663140   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.668358   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 18:27:13.678369   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:27:13.688293   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.692517   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.692581   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.697837   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
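	(The openssl x509 -hash calls above compute the subject-name hash that OpenSSL uses to look up CAs in /etc/ssl/certs, and each ln -fs creates the <hash>.0 link that lookup expects. The same convention for an arbitrary PEM file looks like this, with an illustrative path:

	    # install a CA into the OpenSSL hashed-symlink directory
	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
	)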
	I0621 18:27:13.707967   30068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:27:13.711761   30068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 18:27:13.711821   30068 kubeadm.go:391] StartCluster: {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:27:13.711887   30068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0621 18:27:13.711960   30068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0621 18:27:13.752929   30068 cri.go:89] found id: ""
	I0621 18:27:13.753017   30068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0621 18:27:13.762514   30068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0621 18:27:13.771612   30068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0621 18:27:13.781740   30068 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0621 18:27:13.781758   30068 kubeadm.go:156] found existing configuration files:
	
	I0621 18:27:13.781811   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0621 18:27:13.790876   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0621 18:27:13.790943   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0621 18:27:13.800011   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0621 18:27:13.809117   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0621 18:27:13.809168   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0621 18:27:13.818279   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0621 18:27:13.827522   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0621 18:27:13.827584   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0621 18:27:13.836671   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0621 18:27:13.845242   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0621 18:27:13.845298   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0621 18:27:13.854365   30068 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0621 18:27:13.951888   30068 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0621 18:27:13.951970   30068 kubeadm.go:309] [preflight] Running pre-flight checks
	I0621 18:27:14.081675   30068 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0621 18:27:14.081845   30068 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0621 18:27:14.081983   30068 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0621 18:27:14.292951   30068 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0621 18:27:14.423174   30068 out.go:204]   - Generating certificates and keys ...
	I0621 18:27:14.423287   30068 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0621 18:27:14.423355   30068 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0621 18:27:14.524306   30068 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0621 18:27:14.693249   30068 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0621 18:27:14.771462   30068 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0621 18:27:14.965492   30068 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0621 18:27:15.095342   30068 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0621 18:27:15.095646   30068 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-406291 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I0621 18:27:15.247328   30068 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0621 18:27:15.247729   30068 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-406291 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I0621 18:27:15.326656   30068 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0621 18:27:15.470979   30068 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0621 18:27:15.620090   30068 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0621 18:27:15.620402   30068 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0621 18:27:15.715693   30068 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0621 18:27:16.259484   30068 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0621 18:27:16.704626   30068 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0621 18:27:16.836633   30068 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0621 18:27:16.996818   30068 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0621 18:27:16.997517   30068 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0621 18:27:16.999949   30068 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0621 18:27:17.001874   30068 out.go:204]   - Booting up control plane ...
	I0621 18:27:17.001982   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0621 18:27:17.002874   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0621 18:27:17.003729   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0621 18:27:17.018894   30068 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0621 18:27:17.019816   30068 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0621 18:27:17.019944   30068 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0621 18:27:17.138099   30068 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0621 18:27:17.138195   30068 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0621 18:27:17.639115   30068 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.282189ms
	I0621 18:27:17.639214   30068 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0621 18:27:23.502026   30068 kubeadm.go:309] [api-check] The API server is healthy after 5.864418149s
	I0621 18:27:23.512938   30068 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0621 18:27:23.528670   30068 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0621 18:27:24.059886   30068 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0621 18:27:24.060060   30068 kubeadm.go:309] [mark-control-plane] Marking the node ha-406291 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0621 18:27:24.071607   30068 kubeadm.go:309] [bootstrap-token] Using token: ha2utu.p9k0bq1xsr5791t7
	I0621 18:27:24.073185   30068 out.go:204]   - Configuring RBAC rules ...
	I0621 18:27:24.073336   30068 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0621 18:27:24.084336   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0621 18:27:24.092265   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0621 18:27:24.096415   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0621 18:27:24.101175   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0621 18:27:24.104689   30068 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0621 18:27:24.121568   30068 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0621 18:27:24.349610   30068 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0621 18:27:24.907607   30068 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0621 18:27:24.908452   30068 kubeadm.go:309] 
	I0621 18:27:24.908529   30068 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0621 18:27:24.908541   30068 kubeadm.go:309] 
	I0621 18:27:24.908607   30068 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0621 18:27:24.908645   30068 kubeadm.go:309] 
	I0621 18:27:24.908698   30068 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0621 18:27:24.908780   30068 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0621 18:27:24.908863   30068 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0621 18:27:24.908873   30068 kubeadm.go:309] 
	I0621 18:27:24.908975   30068 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0621 18:27:24.908993   30068 kubeadm.go:309] 
	I0621 18:27:24.909038   30068 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0621 18:27:24.909045   30068 kubeadm.go:309] 
	I0621 18:27:24.909086   30068 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0621 18:27:24.909160   30068 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0621 18:27:24.909256   30068 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0621 18:27:24.909274   30068 kubeadm.go:309] 
	I0621 18:27:24.909401   30068 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0621 18:27:24.909522   30068 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0621 18:27:24.909544   30068 kubeadm.go:309] 
	I0621 18:27:24.909671   30068 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ha2utu.p9k0bq1xsr5791t7 \
	I0621 18:27:24.909771   30068 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df \
	I0621 18:27:24.909810   30068 kubeadm.go:309] 	--control-plane 
	I0621 18:27:24.909824   30068 kubeadm.go:309] 
	I0621 18:27:24.909898   30068 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0621 18:27:24.909904   30068 kubeadm.go:309] 
	I0621 18:27:24.909977   30068 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ha2utu.p9k0bq1xsr5791t7 \
	I0621 18:27:24.910064   30068 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df 
	I0621 18:27:24.910664   30068 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
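	(The join commands printed above embed the bootstrap token ha2utu.p9k0bq1xsr5791t7, which the config gives a 24h TTL; after it expires a fresh command has to be generated on a control-plane node. Standard kubeadm commands for that, not something this run performs:

	    # print a new worker join command with a fresh token
	    sudo kubeadm token create --print-join-command
	    # for an extra control-plane node, also refresh the certificate key
	    sudo kubeadm init phase upload-certs --upload-certs
	)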
	I0621 18:27:24.910700   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:27:24.910708   30068 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0621 18:27:24.912398   30068 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0621 18:27:24.913676   30068 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0621 18:27:24.919660   30068 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0621 18:27:24.919677   30068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0621 18:27:24.938734   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0621 18:27:25.303975   30068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0621 18:27:25.304070   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:25.304073   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-406291 minikube.k8s.io/updated_at=2024_06_21T18_27_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600 minikube.k8s.io/name=ha-406291 minikube.k8s.io/primary=true
	I0621 18:27:25.334777   30068 ops.go:34] apiserver oom_adj: -16
	I0621 18:27:25.436873   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:25.937461   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:26.436991   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:26.937206   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:27.437152   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:27.937860   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:28.437177   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:28.937036   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:29.437007   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:29.937140   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:30.437060   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:30.937199   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:31.437695   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:31.937675   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:32.437034   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:32.937808   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:33.437793   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:33.937401   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:34.437307   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:34.937172   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:35.437428   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:35.937146   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:36.436951   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:36.937873   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:37.039583   30068 kubeadm.go:1107] duration metric: took 11.735587948s to wait for elevateKubeSystemPrivileges
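	(The block of repeated "kubectl get sa default" calls above is a readiness poll: minikube retries roughly every half second until the controller-manager has created the default ServiceAccount, which is what the 11.7s elevateKubeSystemPrivileges metric measures. A standalone equivalent of that loop, using the paths from this run:

	    # wait until the default ServiceAccount exists
	    until sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	        sleep 0.5
	    done
	)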
	W0621 18:27:37.039626   30068 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0621 18:27:37.039635   30068 kubeadm.go:393] duration metric: took 23.327819322s to StartCluster
	I0621 18:27:37.039654   30068 settings.go:142] acquiring lock: {Name:mkdbb660cad4d8fb446e5c2ca4439ea3326e9592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:37.039737   30068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:27:37.040362   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/kubeconfig: {Name:mk87038194ab41f67dd50d90b017d32a83c3da4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:37.040584   30068 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:27:37.040604   30068 start.go:240] waiting for startup goroutines ...
	I0621 18:27:37.040603   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0621 18:27:37.040612   30068 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0621 18:27:37.040669   30068 addons.go:69] Setting storage-provisioner=true in profile "ha-406291"
	I0621 18:27:37.040677   30068 addons.go:69] Setting default-storageclass=true in profile "ha-406291"
	I0621 18:27:37.040699   30068 addons.go:234] Setting addon storage-provisioner=true in "ha-406291"
	I0621 18:27:37.040730   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:27:37.040700   30068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-406291"
	I0621 18:27:37.040772   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:37.041052   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.041075   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.041146   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.041174   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.055583   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42699
	I0621 18:27:37.056062   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.056549   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.056570   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.056894   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.057371   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.057399   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.061343   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44857
	I0621 18:27:37.061846   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.062393   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.062418   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.062721   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.062885   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.065021   30068 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:27:37.065351   30068 kapi.go:59] client config for ha-406291: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt", KeyFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key", CAFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf98a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0621 18:27:37.065825   30068 cert_rotation.go:137] Starting client certificate rotation controller
	I0621 18:27:37.066065   30068 addons.go:234] Setting addon default-storageclass=true in "ha-406291"
	I0621 18:27:37.066106   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:27:37.066471   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.066512   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.072759   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39433
	I0621 18:27:37.073274   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.073791   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.073819   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.074169   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.074346   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.076096   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:37.078312   30068 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0621 18:27:37.079815   30068 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 18:27:37.079840   30068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0621 18:27:37.079864   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:37.081896   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0621 18:27:37.082293   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.082859   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.082878   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.083163   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.083202   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.083607   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.083648   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.083733   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:37.083752   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.083817   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:37.083990   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:37.084135   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:37.084288   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:37.103512   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42081
	I0621 18:27:37.103937   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.104456   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.104473   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.104853   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.105052   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.106976   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:37.107211   30068 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0621 18:27:37.107231   30068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0621 18:27:37.107252   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:37.110295   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.110729   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:37.110755   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.110870   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:37.111030   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:37.111197   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:37.111314   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:37.137868   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0621 18:27:37.228739   30068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 18:27:37.290397   30068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0621 18:27:37.684619   30068 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0621 18:27:37.902862   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.902882   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.902957   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.902988   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903179   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903194   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903203   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.903210   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903287   30068 main.go:141] libmachine: (ha-406291) DBG | Closing plugin on server side
	I0621 18:27:37.903300   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903312   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903321   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.903328   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903474   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903485   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903513   30068 main.go:141] libmachine: (ha-406291) DBG | Closing plugin on server side
	I0621 18:27:37.903578   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903595   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903740   30068 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0621 18:27:37.903767   30068 round_trippers.go:469] Request Headers:
	I0621 18:27:37.903778   30068 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:27:37.903784   30068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:27:37.922164   30068 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0621 18:27:37.922691   30068 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0621 18:27:37.922706   30068 round_trippers.go:469] Request Headers:
	I0621 18:27:37.922713   30068 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:27:37.922718   30068 round_trippers.go:473]     Content-Type: application/json
	I0621 18:27:37.922720   30068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:27:37.926249   30068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0621 18:27:37.926491   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.926512   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.926731   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.926748   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.928515   30068 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0621 18:27:37.930095   30068 addons.go:510] duration metric: took 889.47949ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0621 18:27:37.930127   30068 start.go:245] waiting for cluster config update ...
	I0621 18:27:37.930137   30068 start.go:254] writing updated cluster config ...
	I0621 18:27:37.931687   30068 out.go:177] 
	I0621 18:27:37.933039   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:37.933102   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:37.934716   30068 out.go:177] * Starting "ha-406291-m02" control-plane node in "ha-406291" cluster
	I0621 18:27:37.935953   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:27:37.935970   30068 cache.go:56] Caching tarball of preloaded images
	I0621 18:27:37.936052   30068 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:27:37.936063   30068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:27:37.936142   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:37.936325   30068 start.go:360] acquireMachinesLock for ha-406291-m02: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:27:37.936370   30068 start.go:364] duration metric: took 24.972µs to acquireMachinesLock for "ha-406291-m02"
	I0621 18:27:37.936392   30068 start.go:93] Provisioning new machine with config: &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:27:37.936481   30068 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0621 18:27:37.938349   30068 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0621 18:27:37.938428   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.938450   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.952767   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34515
	I0621 18:27:37.953176   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.953649   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.953669   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.953963   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.954162   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:37.954301   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:37.954431   30068 start.go:159] libmachine.API.Create for "ha-406291" (driver="kvm2")
	I0621 18:27:37.954456   30068 client.go:168] LocalClient.Create starting
	I0621 18:27:37.954488   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem
	I0621 18:27:37.954518   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:27:37.954538   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:27:37.954589   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem
	I0621 18:27:37.954607   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:27:37.954621   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:27:37.954636   30068 main.go:141] libmachine: Running pre-create checks...
	I0621 18:27:37.954644   30068 main.go:141] libmachine: (ha-406291-m02) Calling .PreCreateCheck
	I0621 18:27:37.954836   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:37.955238   30068 main.go:141] libmachine: Creating machine...
	I0621 18:27:37.955253   30068 main.go:141] libmachine: (ha-406291-m02) Calling .Create
	I0621 18:27:37.955404   30068 main.go:141] libmachine: (ha-406291-m02) Creating KVM machine...
	I0621 18:27:37.956748   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found existing default KVM network
	I0621 18:27:37.956951   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found existing private KVM network mk-ha-406291
	I0621 18:27:37.957069   30068 main.go:141] libmachine: (ha-406291-m02) Setting up store path in /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 ...
	I0621 18:27:37.957091   30068 main.go:141] libmachine: (ha-406291-m02) Building disk image from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 18:27:37.957139   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:37.957062   30460 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:27:37.957278   30068 main.go:141] libmachine: (ha-406291-m02) Downloading /home/jenkins/minikube-integration/19112-8111/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0621 18:27:38.178433   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.178291   30460 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa...
	I0621 18:27:38.322659   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.322470   30460 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/ha-406291-m02.rawdisk...
	I0621 18:27:38.322709   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 (perms=drwx------)
	I0621 18:27:38.322719   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Writing magic tar header
	I0621 18:27:38.322734   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Writing SSH key tar header
	I0621 18:27:38.322745   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.322583   30460 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 ...
	I0621 18:27:38.322758   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02
	I0621 18:27:38.322822   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines
	I0621 18:27:38.322839   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines (perms=drwxr-xr-x)
	I0621 18:27:38.322855   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube (perms=drwxr-xr-x)
	I0621 18:27:38.322864   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111 (perms=drwxrwxr-x)
	I0621 18:27:38.322874   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0621 18:27:38.322882   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0621 18:27:38.322896   30068 main.go:141] libmachine: (ha-406291-m02) Creating domain...
	I0621 18:27:38.322919   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:27:38.322939   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111
	I0621 18:27:38.322950   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0621 18:27:38.322968   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins
	I0621 18:27:38.322980   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home
	I0621 18:27:38.322988   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Skipping /home - not owner
	I0621 18:27:38.324031   30068 main.go:141] libmachine: (ha-406291-m02) define libvirt domain using xml: 
	I0621 18:27:38.324058   30068 main.go:141] libmachine: (ha-406291-m02) <domain type='kvm'>
	I0621 18:27:38.324071   30068 main.go:141] libmachine: (ha-406291-m02)   <name>ha-406291-m02</name>
	I0621 18:27:38.324078   30068 main.go:141] libmachine: (ha-406291-m02)   <memory unit='MiB'>2200</memory>
	I0621 18:27:38.324087   30068 main.go:141] libmachine: (ha-406291-m02)   <vcpu>2</vcpu>
	I0621 18:27:38.324098   30068 main.go:141] libmachine: (ha-406291-m02)   <features>
	I0621 18:27:38.324107   30068 main.go:141] libmachine: (ha-406291-m02)     <acpi/>
	I0621 18:27:38.324116   30068 main.go:141] libmachine: (ha-406291-m02)     <apic/>
	I0621 18:27:38.324125   30068 main.go:141] libmachine: (ha-406291-m02)     <pae/>
	I0621 18:27:38.324134   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324149   30068 main.go:141] libmachine: (ha-406291-m02)   </features>
	I0621 18:27:38.324164   30068 main.go:141] libmachine: (ha-406291-m02)   <cpu mode='host-passthrough'>
	I0621 18:27:38.324173   30068 main.go:141] libmachine: (ha-406291-m02)   
	I0621 18:27:38.324184   30068 main.go:141] libmachine: (ha-406291-m02)   </cpu>
	I0621 18:27:38.324199   30068 main.go:141] libmachine: (ha-406291-m02)   <os>
	I0621 18:27:38.324209   30068 main.go:141] libmachine: (ha-406291-m02)     <type>hvm</type>
	I0621 18:27:38.324220   30068 main.go:141] libmachine: (ha-406291-m02)     <boot dev='cdrom'/>
	I0621 18:27:38.324231   30068 main.go:141] libmachine: (ha-406291-m02)     <boot dev='hd'/>
	I0621 18:27:38.324258   30068 main.go:141] libmachine: (ha-406291-m02)     <bootmenu enable='no'/>
	I0621 18:27:38.324280   30068 main.go:141] libmachine: (ha-406291-m02)   </os>
	I0621 18:27:38.324293   30068 main.go:141] libmachine: (ha-406291-m02)   <devices>
	I0621 18:27:38.324310   30068 main.go:141] libmachine: (ha-406291-m02)     <disk type='file' device='cdrom'>
	I0621 18:27:38.324333   30068 main.go:141] libmachine: (ha-406291-m02)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/boot2docker.iso'/>
	I0621 18:27:38.324344   30068 main.go:141] libmachine: (ha-406291-m02)       <target dev='hdc' bus='scsi'/>
	I0621 18:27:38.324350   30068 main.go:141] libmachine: (ha-406291-m02)       <readonly/>
	I0621 18:27:38.324357   30068 main.go:141] libmachine: (ha-406291-m02)     </disk>
	I0621 18:27:38.324363   30068 main.go:141] libmachine: (ha-406291-m02)     <disk type='file' device='disk'>
	I0621 18:27:38.324375   30068 main.go:141] libmachine: (ha-406291-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0621 18:27:38.324390   30068 main.go:141] libmachine: (ha-406291-m02)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/ha-406291-m02.rawdisk'/>
	I0621 18:27:38.324401   30068 main.go:141] libmachine: (ha-406291-m02)       <target dev='hda' bus='virtio'/>
	I0621 18:27:38.324412   30068 main.go:141] libmachine: (ha-406291-m02)     </disk>
	I0621 18:27:38.324421   30068 main.go:141] libmachine: (ha-406291-m02)     <interface type='network'>
	I0621 18:27:38.324431   30068 main.go:141] libmachine: (ha-406291-m02)       <source network='mk-ha-406291'/>
	I0621 18:27:38.324442   30068 main.go:141] libmachine: (ha-406291-m02)       <model type='virtio'/>
	I0621 18:27:38.324453   30068 main.go:141] libmachine: (ha-406291-m02)     </interface>
	I0621 18:27:38.324465   30068 main.go:141] libmachine: (ha-406291-m02)     <interface type='network'>
	I0621 18:27:38.324474   30068 main.go:141] libmachine: (ha-406291-m02)       <source network='default'/>
	I0621 18:27:38.324481   30068 main.go:141] libmachine: (ha-406291-m02)       <model type='virtio'/>
	I0621 18:27:38.324493   30068 main.go:141] libmachine: (ha-406291-m02)     </interface>
	I0621 18:27:38.324503   30068 main.go:141] libmachine: (ha-406291-m02)     <serial type='pty'>
	I0621 18:27:38.324516   30068 main.go:141] libmachine: (ha-406291-m02)       <target port='0'/>
	I0621 18:27:38.324527   30068 main.go:141] libmachine: (ha-406291-m02)     </serial>
	I0621 18:27:38.324540   30068 main.go:141] libmachine: (ha-406291-m02)     <console type='pty'>
	I0621 18:27:38.324553   30068 main.go:141] libmachine: (ha-406291-m02)       <target type='serial' port='0'/>
	I0621 18:27:38.324562   30068 main.go:141] libmachine: (ha-406291-m02)     </console>
	I0621 18:27:38.324572   30068 main.go:141] libmachine: (ha-406291-m02)     <rng model='virtio'>
	I0621 18:27:38.324596   30068 main.go:141] libmachine: (ha-406291-m02)       <backend model='random'>/dev/random</backend>
	I0621 18:27:38.324609   30068 main.go:141] libmachine: (ha-406291-m02)     </rng>
	I0621 18:27:38.324630   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324640   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324648   30068 main.go:141] libmachine: (ha-406291-m02)   </devices>
	I0621 18:27:38.324660   30068 main.go:141] libmachine: (ha-406291-m02) </domain>
	I0621 18:27:38.324670   30068 main.go:141] libmachine: (ha-406291-m02) 
	I0621 18:27:38.332042   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:20:08:0e in network default
	I0621 18:27:38.332641   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring networks are active...
	I0621 18:27:38.332676   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:38.333428   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring network default is active
	I0621 18:27:38.333804   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring network mk-ha-406291 is active
	I0621 18:27:38.334296   30068 main.go:141] libmachine: (ha-406291-m02) Getting domain xml...
	I0621 18:27:38.335120   30068 main.go:141] libmachine: (ha-406291-m02) Creating domain...
	I0621 18:27:39.549305   30068 main.go:141] libmachine: (ha-406291-m02) Waiting to get IP...
	I0621 18:27:39.550967   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:39.551951   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:39.551976   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:39.551936   30460 retry.go:31] will retry after 267.635955ms: waiting for machine to come up
	I0621 18:27:39.821522   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:39.821997   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:39.822029   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:39.821946   30460 retry.go:31] will retry after 374.873977ms: waiting for machine to come up
	I0621 18:27:40.198386   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:40.198873   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:40.198904   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:40.198809   30460 retry.go:31] will retry after 315.815993ms: waiting for machine to come up
	I0621 18:27:40.516366   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:40.516862   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:40.516886   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:40.516817   30460 retry.go:31] will retry after 541.866776ms: waiting for machine to come up
	I0621 18:27:41.060525   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:41.061206   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:41.061240   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:41.061128   30460 retry.go:31] will retry after 493.062164ms: waiting for machine to come up
	I0621 18:27:41.555747   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:41.556109   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:41.556139   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:41.556061   30460 retry.go:31] will retry after 805.68132ms: waiting for machine to come up
	I0621 18:27:42.362929   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:42.363432   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:42.363464   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:42.363390   30460 retry.go:31] will retry after 986.445399ms: waiting for machine to come up
	I0621 18:27:43.351818   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:43.352265   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:43.352293   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:43.352201   30460 retry.go:31] will retry after 1.001415085s: waiting for machine to come up
	I0621 18:27:44.355253   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:44.355689   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:44.355710   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:44.355671   30460 retry.go:31] will retry after 1.270979624s: waiting for machine to come up
	I0621 18:27:45.627787   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:45.628323   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:45.628354   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:45.628272   30460 retry.go:31] will retry after 2.328221347s: waiting for machine to come up
	I0621 18:27:47.958352   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:47.958918   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:47.958945   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:47.958858   30460 retry.go:31] will retry after 2.603205559s: waiting for machine to come up
	I0621 18:27:50.565502   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:50.565956   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:50.565982   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:50.565839   30460 retry.go:31] will retry after 3.267607258s: waiting for machine to come up
	I0621 18:27:53.834801   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:53.835311   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:53.835344   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:53.835270   30460 retry.go:31] will retry after 4.450176964s: waiting for machine to come up
	I0621 18:27:58.286744   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.287205   30068 main.go:141] libmachine: (ha-406291-m02) Found IP for machine: 192.168.39.89
	I0621 18:27:58.287228   30068 main.go:141] libmachine: (ha-406291-m02) Reserving static IP address...
	I0621 18:27:58.287241   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has current primary IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.287601   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find host DHCP lease matching {name: "ha-406291-m02", mac: "52:54:00:a6:9a:09", ip: "192.168.39.89"} in network mk-ha-406291
	I0621 18:27:58.359643   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Getting to WaitForSSH function...
	I0621 18:27:58.359672   30068 main.go:141] libmachine: (ha-406291-m02) Reserved static IP address: 192.168.39.89
	I0621 18:27:58.359686   30068 main.go:141] libmachine: (ha-406291-m02) Waiting for SSH to be available...
	I0621 18:27:58.362234   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.362656   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.362687   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.362831   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH client type: external
	I0621 18:27:58.362856   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa (-rw-------)
	I0621 18:27:58.362889   30068 main.go:141] libmachine: (ha-406291-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:58.362901   30068 main.go:141] libmachine: (ha-406291-m02) DBG | About to run SSH command:
	I0621 18:27:58.362914   30068 main.go:141] libmachine: (ha-406291-m02) DBG | exit 0
	I0621 18:27:58.489760   30068 main.go:141] libmachine: (ha-406291-m02) DBG | SSH cmd err, output: <nil>: 
	I0621 18:27:58.490247   30068 main.go:141] libmachine: (ha-406291-m02) KVM machine creation complete!
	I0621 18:27:58.490512   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:58.491093   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:58.491338   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:58.491506   30068 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0621 18:27:58.491523   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:27:58.492807   30068 main.go:141] libmachine: Detecting operating system of created instance...
	I0621 18:27:58.492820   30068 main.go:141] libmachine: Waiting for SSH to be available...
	I0621 18:27:58.492825   30068 main.go:141] libmachine: Getting to WaitForSSH function...
	I0621 18:27:58.492853   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.495422   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.495802   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.495822   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.496013   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.496199   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.496377   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.496515   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.496690   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.496943   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.496957   30068 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0621 18:27:58.609072   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:58.609094   30068 main.go:141] libmachine: Detecting the provisioner...
	I0621 18:27:58.609101   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.611976   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.612412   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.612450   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.612655   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.612869   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.613083   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.613234   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.613421   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.613617   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.613629   30068 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0621 18:27:58.726636   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0621 18:27:58.726736   30068 main.go:141] libmachine: found compatible host: buildroot
	I0621 18:27:58.726751   30068 main.go:141] libmachine: Provisioning with buildroot...
	I0621 18:27:58.726768   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.727017   30068 buildroot.go:166] provisioning hostname "ha-406291-m02"
	I0621 18:27:58.727040   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.727234   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.729851   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.730255   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.730296   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.730453   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.730628   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.730787   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.730932   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.731090   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.731271   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.731295   30068 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291-m02 && echo "ha-406291-m02" | sudo tee /etc/hostname
	I0621 18:27:58.855682   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291-m02
	
	I0621 18:27:58.855710   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.858373   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.858679   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.858702   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.858921   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.859107   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.859289   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.859473   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.859613   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.859768   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.859784   30068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:27:58.979692   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:58.979722   30068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:27:58.979735   30068 buildroot.go:174] setting up certificates
	I0621 18:27:58.979743   30068 provision.go:84] configureAuth start
	I0621 18:27:58.979750   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.980076   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:58.982730   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.983078   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.983110   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.983252   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.985344   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.985701   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.985721   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.985890   30068 provision.go:143] copyHostCerts
	I0621 18:27:58.985924   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:58.985962   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:27:58.985976   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:58.986057   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:27:58.986156   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:58.986180   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:27:58.986187   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:58.986229   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:27:58.986293   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:58.986317   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:27:58.986326   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:58.986360   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:27:58.986426   30068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291-m02 san=[127.0.0.1 192.168.39.89 ha-406291-m02 localhost minikube]
	I0621 18:27:59.066564   30068 provision.go:177] copyRemoteCerts
	I0621 18:27:59.066626   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:27:59.066653   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.069578   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.069924   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.069947   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.070132   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.070298   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.070432   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.070553   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.157218   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:27:59.157315   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 18:27:59.181198   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:27:59.181277   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:27:59.204590   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:27:59.204671   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0621 18:27:59.228836   30068 provision.go:87] duration metric: took 249.081961ms to configureAuth
	I0621 18:27:59.228857   30068 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:27:59.229023   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:59.229086   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.231759   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.232083   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.232114   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.232338   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.232525   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.232684   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.232859   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.233030   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:59.233222   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:59.233247   30068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:27:59.513149   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 18:27:59.513176   30068 main.go:141] libmachine: Checking connection to Docker...
	I0621 18:27:59.513184   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetURL
	I0621 18:27:59.514352   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using libvirt version 6000000
	I0621 18:27:59.516825   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.517208   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.517232   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.517421   30068 main.go:141] libmachine: Docker is up and running!
	I0621 18:27:59.517438   30068 main.go:141] libmachine: Reticulating splines...
	I0621 18:27:59.517446   30068 client.go:171] duration metric: took 21.562982419s to LocalClient.Create
	I0621 18:27:59.517469   30068 start.go:167] duration metric: took 21.563040702s to libmachine.API.Create "ha-406291"
	I0621 18:27:59.517482   30068 start.go:293] postStartSetup for "ha-406291-m02" (driver="kvm2")
	I0621 18:27:59.517494   30068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:27:59.517516   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.517768   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:27:59.517792   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.520113   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.520510   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.520540   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.520681   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.520881   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.521084   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.521256   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.607755   30068 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:27:59.611555   30068 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:27:59.611581   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:27:59.611696   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:27:59.611804   30068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:27:59.611817   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:27:59.611939   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:27:59.620359   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:59.643420   30068 start.go:296] duration metric: took 125.923821ms for postStartSetup
	I0621 18:27:59.643465   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:59.644062   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:59.646345   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.646685   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.646713   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.646924   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:59.647158   30068 start.go:128] duration metric: took 21.710666055s to createHost
	I0621 18:27:59.647181   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.649469   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.649766   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.649808   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.649962   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.650164   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.650334   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.650463   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.650585   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:59.650778   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:59.650790   30068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0621 18:27:59.762223   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718994479.737744516
	
	I0621 18:27:59.762248   30068 fix.go:216] guest clock: 1718994479.737744516
	I0621 18:27:59.762259   30068 fix.go:229] Guest: 2024-06-21 18:27:59.737744516 +0000 UTC Remote: 2024-06-21 18:27:59.647170431 +0000 UTC m=+77.232139235 (delta=90.574085ms)
	I0621 18:27:59.762274   30068 fix.go:200] guest clock delta is within tolerance: 90.574085ms
	I0621 18:27:59.762279   30068 start.go:83] releasing machines lock for "ha-406291-m02", held for 21.825898335s
	I0621 18:27:59.762294   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.762550   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:59.765379   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.765744   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.765772   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.768017   30068 out.go:177] * Found network options:
	I0621 18:27:59.769201   30068 out.go:177]   - NO_PROXY=192.168.39.198
	W0621 18:27:59.770311   30068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0621 18:27:59.770350   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.770853   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.771049   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.771143   30068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:27:59.771180   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	W0621 18:27:59.771247   30068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0621 18:27:59.771305   30068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:27:59.771322   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.774073   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774210   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774455   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.774482   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774586   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.774595   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.774615   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774740   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.774796   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.774875   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.774963   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.775030   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.775150   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.775184   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:28:00.009851   30068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:28:00.016373   30068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:28:00.016450   30068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:28:00.032199   30068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0621 18:28:00.032221   30068 start.go:494] detecting cgroup driver to use...
	I0621 18:28:00.032283   30068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:28:00.047343   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:28:00.061720   30068 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:28:00.061774   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:28:00.074668   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:28:00.087919   30068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:28:00.213060   30068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:28:00.376339   30068 docker.go:233] disabling docker service ...
	I0621 18:28:00.376406   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:28:00.391732   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:28:00.405305   30068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:28:00.525867   30068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:28:00.642362   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 18:28:00.656276   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:28:00.673811   30068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:28:00.673883   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.683794   30068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:28:00.683849   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.693601   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.703298   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.712924   30068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:28:00.722921   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.733272   30068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.749781   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
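	(A minimal sketch of how to confirm what the sed edits above should have written into the CRI-O drop-in; the grep pattern and the "expected" lines are reconstructed from the commands in the log, not a dump of the actual file:

	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # expected, approximately:
	    #   pause_image = "registry.k8s.io/pause:3.9"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0"  (inside default_sysctls)
	)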
	I0621 18:28:00.759708   30068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:28:00.768749   30068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 18:28:00.768811   30068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 18:28:00.780758   30068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
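	(The two kernel prerequisites set just above can be double-checked by hand on the node; a sketch, assuming the module loaded and the sysctl key now exists:

	    lsmod | grep br_netfilter
	    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
	)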
	I0621 18:28:00.789993   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:28:00.904855   30068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 18:28:01.039631   30068 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:28:01.039706   30068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:28:01.044480   30068 start.go:562] Will wait 60s for crictl version
	I0621 18:28:01.044536   30068 ssh_runner.go:195] Run: which crictl
	I0621 18:28:01.048220   30068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:28:01.089333   30068 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:28:01.089402   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:28:01.115665   30068 ssh_runner.go:195] Run: crio --version
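	(The same runtime version information can be queried by hand against the socket written into /etc/crictl.yaml earlier; a sketch, run on the node:

	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	)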
	I0621 18:28:01.144585   30068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:28:01.145952   30068 out.go:177]   - env NO_PROXY=192.168.39.198
	I0621 18:28:01.147149   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:28:01.149745   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:28:01.150121   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:28:01.150153   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:28:01.150424   30068 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:28:01.154395   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:28:01.167802   30068 mustload.go:65] Loading cluster: ha-406291
	I0621 18:28:01.168024   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:28:01.168528   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:28:01.168581   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:28:01.183458   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35465
	I0621 18:28:01.183955   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:28:01.184452   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:28:01.184472   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:28:01.184809   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:28:01.185006   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:28:01.186504   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:28:01.186796   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:28:01.186838   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:28:01.201898   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38995
	I0621 18:28:01.202307   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:28:01.202715   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:28:01.202735   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:28:01.203060   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:28:01.203242   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:28:01.203402   30068 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.89
	I0621 18:28:01.203414   30068 certs.go:194] generating shared ca certs ...
	I0621 18:28:01.203427   30068 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.203536   30068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:28:01.203569   30068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:28:01.203578   30068 certs.go:256] generating profile certs ...
	I0621 18:28:01.203637   30068 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:28:01.203663   30068 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63
	I0621 18:28:01.203682   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.89 192.168.39.254]
	I0621 18:28:01.277240   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 ...
	I0621 18:28:01.277269   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63: {Name:mk0eb1e86875fe5e87f845f9e621f66001c859bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.277433   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63 ...
	I0621 18:28:01.277446   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63: {Name:mk95e28e76a927e44fae3dabafa76ecc474c70ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.277517   30068 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:28:01.277686   30068 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
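	(If needed, the SANs baked into that apiserver cert — the control-plane IPs plus the HA VIP 192.168.39.254 listed in the generation step above — can be inspected with openssl; a sketch using the profile path from the log:

	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'
	)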
	I0621 18:28:01.277852   30068 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:28:01.277870   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:28:01.277883   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:28:01.277894   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:28:01.277906   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:28:01.277922   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:28:01.277934   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:28:01.277946   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:28:01.277957   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:28:01.278003   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:28:01.278030   30068 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:28:01.278039   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:28:01.278059   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:28:01.278080   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:28:01.278100   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:28:01.278136   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:28:01.278162   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.278179   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.278191   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.278220   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:28:01.281289   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:28:01.281749   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:28:01.281771   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:28:01.281960   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:28:01.282180   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:28:01.282351   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:28:01.282534   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:28:01.350153   30068 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0621 18:28:01.355146   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0621 18:28:01.366317   30068 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0621 18:28:01.370418   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0621 18:28:01.381527   30068 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0621 18:28:01.385371   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0621 18:28:01.395583   30068 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0621 18:28:01.399523   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0621 18:28:01.409427   30068 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0621 18:28:01.413340   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0621 18:28:01.424281   30068 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0621 18:28:01.428574   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0621 18:28:01.443501   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:28:01.467141   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:28:01.489464   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:28:01.512839   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:28:01.536345   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0621 18:28:01.560903   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0621 18:28:01.585228   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:28:01.609236   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:28:01.632797   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:28:01.657717   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:28:01.680728   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:28:01.704813   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0621 18:28:01.722206   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0621 18:28:01.739548   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0621 18:28:01.757066   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0621 18:28:01.773769   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0621 18:28:01.790648   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0621 18:28:01.807019   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0621 18:28:01.824606   30068 ssh_runner.go:195] Run: openssl version
	I0621 18:28:01.830760   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:28:01.841994   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.846701   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.846753   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.852556   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0621 18:28:01.863407   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:28:01.874586   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.879134   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.879185   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.884636   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:28:01.895639   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:28:01.907107   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.911747   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.911813   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.917537   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
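	(The hash-named links created above follow the usual OpenSSL c_rehash convention: the link name is the certificate's subject hash. A sketch of how the last one is derived, using the paths from the log; here the hash evaluates to 3ec20f2e:

	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem)
	    sudo ln -fs /etc/ssl/certs/153292.pem "/etc/ssl/certs/${hash}.0"
	)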
	I0621 18:28:01.928452   30068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:28:01.932569   30068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 18:28:01.932640   30068 kubeadm.go:928] updating node {m02 192.168.39.89 8443 v1.30.2 crio true true} ...
	I0621 18:28:01.932831   30068 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0621 18:28:01.932869   30068 kube-vip.go:115] generating kube-vip config ...
	I0621 18:28:01.932919   30068 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:28:01.949970   30068 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:28:01.950046   30068 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
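	(With cp_enable and lb_enable set, this manifest runs kube-vip on the control-plane node and advertises the VIP 192.168.39.254 on port 8443 over eth0; once a leader is elected, the address should be visible on that interface. A quick check, as a sketch:

	    ip addr show eth0 | grep 192.168.39.254
	)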
	I0621 18:28:01.950102   30068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:28:01.960116   30068 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0621 18:28:01.960197   30068 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0621 18:28:01.969893   30068 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0621 18:28:01.969926   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0621 18:28:01.969997   30068 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0621 18:28:01.970033   30068 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0621 18:28:01.970001   30068 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0621 18:28:01.974344   30068 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0621 18:28:01.974375   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0621 18:28:02.755689   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0621 18:28:02.755764   30068 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0621 18:28:02.760415   30068 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0621 18:28:02.760448   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0621 18:28:55.051081   30068 out.go:177] 
	W0621 18:28:55.052955   30068 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0] Decompressors:map[bz2:0xc000769610 gz:0xc000769618 tar:0xc0007695c0 tar.bz2:0xc0007695d0 tar.gz:0xc0007695e0 tar.xz:0xc0007695f0 tar.zst:0xc000769600 tbz2:0xc0007695d0 tgz:0xc0007695e0 txz:0xc0007695f0 tzst:0xc000769600 xz:0xc000769620 zip:0xc000769630 zst:0xc000769628] Getters:map[file:0xc0009371c0 http:0xc
0008bcf50 https:0xc0008bcfa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:46716->151.101.193.55:443: read: connection reset by peer
	W0621 18:28:55.052979   30068 out.go:239] * 
	W0621 18:28:55.053829   30068 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0621 18:28:55.055312   30068 out.go:177] 
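	(The exit above is a network error — the TCP connection was reset while downloading the kubelet binary for the second control-plane node. The same download and checksum verification can be retried by hand with the URLs from the error message; a sketch, assuming the .sha256 file holds a bare digest as minikube's checksum= getter expects, with illustrative local file names:

	    curl -fLo kubelet https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet
	    curl -fLo kubelet.sha256 https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
	    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
	)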
	
	
	==> CRI-O <==
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.681577374Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995278681550604,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3a705a5-28a4-40b9-bffc-dad926e81858 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.682113440Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=617a5015-d35c-4736-ba48-1bb8b73c2334 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.682202664Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=617a5015-d35c-4736-ba48-1bb8b73c2334 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.682423777Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=617a5015-d35c-4736-ba48-1bb8b73c2334 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.718326289Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c1f9514d-b4cb-425d-bf86-4efbdf9d6f4c name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.718405587Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c1f9514d-b4cb-425d-bf86-4efbdf9d6f4c name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.724813589Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b55edd9c-8582-4676-92fe-4010cd42fbf0 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.725464289Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995278725438664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b55edd9c-8582-4676-92fe-4010cd42fbf0 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.725965341Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83106a92-534c-413e-be94-8120439b76d8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.726020967Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83106a92-534c-413e-be94-8120439b76d8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.726290791Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83106a92-534c-413e-be94-8120439b76d8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.761221736Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1189d787-d0bb-4c43-add5-3c472f23c765 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.761296170Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1189d787-d0bb-4c43-add5-3c472f23c765 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.762624179Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bcbf31a3-d5d5-4646-a00b-5847e299a4a6 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.763084037Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995278763058814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bcbf31a3-d5d5-4646-a00b-5847e299a4a6 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.763588570Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b19bf30-f08e-40e4-ac31-57af0da2f17f name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.763663234Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b19bf30-f08e-40e4-ac31-57af0da2f17f name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.764695192Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b19bf30-f08e-40e4-ac31-57af0da2f17f name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.803843771Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b77ce75-b510-4a1f-b6f1-622eabf9d527 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.803933459Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b77ce75-b510-4a1f-b6f1-622eabf9d527 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.805039379Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=499e2b70-0dd9-4eac-96f7-bd8b57a51a8c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.805601846Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995278805576495,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=499e2b70-0dd9-4eac-96f7-bd8b57a51a8c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.806294968Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=083e03c5-19f1-42a3-9877-f1b788dce1dc name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.806347738Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=083e03c5-19f1-42a3-9877-f1b788dce1dc name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:18 ha-406291 crio[679]: time="2024-06-21 18:41:18.806672992Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=083e03c5-19f1-42a3-9877-f1b788dce1dc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	252cb2f279857       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   12 minutes ago      Running             busybox                   0                   cd0fd4f6a3d6c       busybox-fc5497c4f-qvl48
	6d732e2622f11       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   0                   59eb38b2794b0       coredns-7db6d8ff4d-7ng4v
	6088ccc5ec4be       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   0                   a68caa8578d30       coredns-7db6d8ff4d-nx5xs
	9d0ad73531279       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       0                   ab6a16146209c       storage-provisioner
	468b13f5a8054       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      13 minutes ago      Running             kindnet-cni               0                   956df8749e8db       kindnet-vnds7
	e41f8891c5177       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      13 minutes ago      Running             kube-proxy                0                   ab9fd8c2e0094       kube-proxy-xnbqj
	96a229fabb5aa       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     13 minutes ago      Running             kube-vip                  0                   79ad95611cf22       kube-vip-ha-406291
	a143e6000662a       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      14 minutes ago      Running             kube-scheduler            0                   7cae0fc993f3a       kube-scheduler-ha-406291
	89b399d67fa40       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago      Running             etcd                      0                   afce4542ea7ca       etcd-ha-406291
	2d71c6ae5cee5       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      14 minutes ago      Running             kube-apiserver            0                   9552de7a0cb73       kube-apiserver-ha-406291
	3fbe446b39e8d       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      14 minutes ago      Running             kube-controller-manager   0                   2b8837f8e36da       kube-controller-manager-ha-406291
	
	
	==> coredns [6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57758 - 16030 "HINFO IN 938012208132191314.8379741084222464033. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014128651s
	[INFO] 10.244.0.4:60864 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000870211s
	[INFO] 10.244.0.4:49527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014553s
	[INFO] 10.244.0.4:59987 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181145s
	[INFO] 10.244.0.4:59378 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009664502s
	[INFO] 10.244.0.4:59188 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181625s
	[INFO] 10.244.0.4:33100 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137671s
	[INFO] 10.244.0.4:43551 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129631s
	[INFO] 10.244.0.4:59759 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152418s
	[INFO] 10.244.0.4:60292 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090372s
	[INFO] 10.244.0.4:47967 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093215s
	[INFO] 10.244.0.4:44642 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175452s
	[INFO] 10.244.0.4:49677 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000070108s
	
	
	==> coredns [6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45911 - 30730 "HINFO IN 2397840142540691982.2649863782968500509. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014966559s
	[INFO] 10.244.0.4:38404 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.013105268s
	[INFO] 10.244.0.4:49299 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.225770527s
	[INFO] 10.244.0.4:41342 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.010990835s
	[INFO] 10.244.0.4:55838 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003903098s
	[INFO] 10.244.0.4:59078 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163236s
	[INFO] 10.244.0.4:39541 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147137s
	[INFO] 10.244.0.4:47420 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000120366s
	[INFO] 10.244.0.4:54009 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000255172s
	
	
	==> describe nodes <==
	Name:               ha-406291
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=ha-406291
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_21T18_27_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 18:27:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406291
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 18:41:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    ha-406291
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 10b5f2f4e64d426eb3a71e7a23c0cea5
	  System UUID:                10b5f2f4-e64d-426e-b3a7-1e7a23c0cea5
	  Boot ID:                    10778ad9-ed13-4749-a084-25b2b2bfde76
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qvl48              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-7ng4v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-nx5xs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-406291                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-vnds7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-406291             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-406291    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-xnbqj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-406291             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-406291                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node ha-406291 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node ha-406291 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node ha-406291 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m   node-controller  Node ha-406291 event: Registered Node ha-406291 in Controller
	  Normal  NodeReady                13m   kubelet          Node ha-406291 status is now: NodeReady
	
	
	Name:               ha-406291-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406291-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=ha-406291
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_21T18_41_02_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 18:41:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406291-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 18:41:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 18:41:10 +0000   Fri, 21 Jun 2024 18:41:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 18:41:10 +0000   Fri, 21 Jun 2024 18:41:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 18:41:10 +0000   Fri, 21 Jun 2024 18:41:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 18:41:10 +0000   Fri, 21 Jun 2024 18:41:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.193
	  Hostname:    ha-406291-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7aeb6d6b65b246d89e229cf308cb4c9a
	  System UUID:                7aeb6d6b-65b2-46d8-9e22-9cf308cb4c9a
	  Boot ID:                    077bb108-4737-40c3-9892-3695b5a49d4a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-drm4v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-xrm6w              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18s
	  kube-system                 kube-proxy-vknv4           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12s                kube-proxy       
	  Normal  NodeHasSufficientMemory  18s (x2 over 18s)  kubelet          Node ha-406291-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s (x2 over 18s)  kubelet          Node ha-406291-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s (x2 over 18s)  kubelet          Node ha-406291-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                node-controller  Node ha-406291-m03 event: Registered Node ha-406291-m03 in Controller
	  Normal  NodeReady                9s                 kubelet          Node ha-406291-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[Jun21 18:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051748] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037330] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.458081] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.725935] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +4.855560] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun21 18:27] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.057394] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056681] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.167604] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.147792] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.253886] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +3.905184] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.549385] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.060073] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.066237] systemd-fstab-generator[1360]: Ignoring "noauto" option for root device
	[  +0.078680] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.552032] kauditd_printk_skb: 21 callbacks suppressed
	[Jun21 18:28] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8] <==
	{"level":"info","ts":"2024-06-21T18:27:18.93929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-21T18:27:18.93932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgPreVoteResp from f1d2ab5330a2a0e3 at term 1"}
	{"level":"info","ts":"2024-06-21T18:27:18.939332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became candidate at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.939339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgVoteResp from f1d2ab5330a2a0e3 at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.939349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became leader at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.93936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f1d2ab5330a2a0e3 elected leader f1d2ab5330a2a0e3 at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.949394Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.951989Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f1d2ab5330a2a0e3","local-member-attributes":"{Name:ha-406291 ClientURLs:[https://192.168.39.198:2379]}","request-path":"/0/members/f1d2ab5330a2a0e3/attributes","cluster-id":"9fb372ad12afeb1b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-21T18:27:18.952029Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:27:18.952218Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:27:18.966375Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9fb372ad12afeb1b","local-member-id":"f1d2ab5330a2a0e3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.966532Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.966591Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.968078Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.198:2379"}
	{"level":"info","ts":"2024-06-21T18:27:18.969834Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-21T18:27:18.973596Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-21T18:27:18.986355Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-21T18:27:37.357719Z","caller":"traceutil/trace.go:171","msg":"trace[571743030] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"105.067279ms","start":"2024-06-21T18:27:37.252598Z","end":"2024-06-21T18:27:37.357665Z","steps":["trace[571743030] 'process raft request'  (duration: 48.775466ms)","trace[571743030] 'compare'  (duration: 56.093787ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T18:28:12.689426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.176174ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11593268453381319053 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:496 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:369 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-21T18:28:12.689586Z","caller":"traceutil/trace.go:171","msg":"trace[939483523] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"172.541349ms","start":"2024-06-21T18:28:12.517021Z","end":"2024-06-21T18:28:12.689563Z","steps":["trace[939483523] 'process raft request'  (duration: 46.605278ms)","trace[939483523] 'compare'  (duration: 124.988397ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-21T18:37:19.55118Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2024-06-21T18:37:19.562898Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":969,"took":"11.353931ms","hash":518064132,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2441216,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-06-21T18:37:19.562955Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":518064132,"revision":969,"compact-revision":-1}
	{"level":"info","ts":"2024-06-21T18:41:01.46327Z","caller":"traceutil/trace.go:171","msg":"trace[373022302] transaction","detail":"{read_only:false; response_revision:1916; number_of_response:1; }","duration":"202.232692ms","start":"2024-06-21T18:41:01.260997Z","end":"2024-06-21T18:41:01.46323Z","steps":["trace[373022302] 'process raft request'  (duration: 201.291371ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-21T18:41:01.463374Z","caller":"traceutil/trace.go:171","msg":"trace[1787973675] transaction","detail":"{read_only:false; response_revision:1917; number_of_response:1; }","duration":"177.381269ms","start":"2024-06-21T18:41:01.285981Z","end":"2024-06-21T18:41:01.463362Z","steps":["trace[1787973675] 'process raft request'  (duration: 177.120594ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:41:19 up 14 min,  0 users,  load average: 0.44, 0.25, 0.14
	Linux ha-406291 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d] <==
	I0621 18:39:29.510970       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:29.511181       1 main.go:227] handling current node
	I0621 18:39:39.514989       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:39.515025       1 main.go:227] handling current node
	I0621 18:39:49.520764       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:49.520908       1 main.go:227] handling current node
	I0621 18:39:59.524302       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:59.524430       1 main.go:227] handling current node
	I0621 18:40:09.536871       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:09.536951       1 main.go:227] handling current node
	I0621 18:40:19.546045       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:19.546228       1 main.go:227] handling current node
	I0621 18:40:29.557033       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:29.557254       1 main.go:227] handling current node
	I0621 18:40:39.561036       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:39.561193       1 main.go:227] handling current node
	I0621 18:40:49.569235       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:49.569361       1 main.go:227] handling current node
	I0621 18:40:59.579375       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:59.579516       1 main.go:227] handling current node
	I0621 18:41:09.583520       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:41:09.583631       1 main.go:227] handling current node
	I0621 18:41:09.583661       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:41:09.583679       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:41:09.583931       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.193 Flags: [] Table: 0} 
	
	
	==> kube-apiserver [2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3] <==
	I0621 18:27:21.231033       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0621 18:27:21.231057       1 policy_source.go:224] refreshing policies
	E0621 18:27:21.244004       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0621 18:27:21.291900       1 controller.go:615] quota admission added evaluator for: namespaces
	I0621 18:27:21.301249       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0621 18:27:22.093764       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0621 18:27:22.100226       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0621 18:27:22.100345       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0621 18:27:22.679124       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0621 18:27:22.717908       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0621 18:27:22.803597       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0621 18:27:22.812663       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.198]
	I0621 18:27:22.813674       1 controller.go:615] quota admission added evaluator for: endpoints
	I0621 18:27:22.817676       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0621 18:27:23.142771       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0621 18:27:24.323202       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0621 18:27:24.338622       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0621 18:27:24.532806       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0621 18:27:36.921775       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0621 18:27:37.247444       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0621 18:40:26.217258       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52318: use of closed network connection
	E0621 18:40:26.646809       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52394: use of closed network connection
	E0621 18:40:27.039177       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52460: use of closed network connection
	E0621 18:40:29.475531       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52582: use of closed network connection
	E0621 18:40:29.631306       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52614: use of closed network connection
	
	
	==> kube-controller-manager [3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31] <==
	I0621 18:27:37.660938       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="161.085µs"
	I0621 18:27:39.328050       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="55.475µs"
	I0621 18:27:39.330983       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.725µs"
	I0621 18:27:39.352409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.246µs"
	I0621 18:27:39.366116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.163µs"
	I0621 18:27:40.575618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="65.679µs"
	I0621 18:27:40.612176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.937752ms"
	I0621 18:27:40.612598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.232µs"
	I0621 18:27:40.634931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.444693ms"
	I0621 18:27:40.635035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.847µs"
	I0621 18:27:41.885215       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0621 18:28:57.137627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.563277ms"
	I0621 18:28:57.164070       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.375749ms"
	I0621 18:28:57.164194       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.743µs"
	I0621 18:29:00.876863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.452577ms"
	I0621 18:29:00.877083       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.932µs"
	I0621 18:41:01.468373       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-406291-m03\" does not exist"
	I0621 18:41:01.505245       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-406291-m03" podCIDRs=["10.244.1.0/24"]
	I0621 18:41:02.015312       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-406291-m03"
	I0621 18:41:10.879504       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-406291-m03"
	I0621 18:41:10.905675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="137.95µs"
	I0621 18:41:10.905996       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.91µs"
	I0621 18:41:10.921286       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.939µs"
	I0621 18:41:14.431187       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.902838ms"
	I0621 18:41:14.431268       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.911µs"
	
	
	==> kube-proxy [e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547] <==
	I0621 18:27:38.126736       1 server_linux.go:69] "Using iptables proxy"
	I0621 18:27:38.143236       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.198"]
	I0621 18:27:38.177576       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0621 18:27:38.177626       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0621 18:27:38.177644       1 server_linux.go:165] "Using iptables Proxier"
	I0621 18:27:38.180797       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0621 18:27:38.181002       1 server.go:872] "Version info" version="v1.30.2"
	I0621 18:27:38.181026       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 18:27:38.182882       1 config.go:192] "Starting service config controller"
	I0621 18:27:38.183195       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0621 18:27:38.183262       1 config.go:101] "Starting endpoint slice config controller"
	I0621 18:27:38.183278       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0621 18:27:38.184787       1 config.go:319] "Starting node config controller"
	I0621 18:27:38.184819       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0621 18:27:38.283818       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0621 18:27:38.283839       1 shared_informer.go:320] Caches are synced for service config
	I0621 18:27:38.285303       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d] <==
	W0621 18:27:21.175406       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0621 18:27:21.176948       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0621 18:27:21.176960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0621 18:27:21.176992       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:21.177025       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177056       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0621 18:27:21.177088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0621 18:27:21.177120       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0621 18:27:21.177197       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:21.177204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0621 18:27:22.041765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.041824       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.144830       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:22.144881       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0621 18:27:22.217224       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.217266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.256407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:22.256450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0621 18:27:22.361486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0621 18:27:22.361536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0621 18:27:22.366073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0621 18:27:22.366190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0621 18:27:25.267361       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 21 18:36:24 ha-406291 kubelet[1367]: E0621 18:36:24.482853    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:36:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:36:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:36:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:36:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:37:24 ha-406291 kubelet[1367]: E0621 18:37:24.483671    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:37:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:37:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:37:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:37:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:38:24 ha-406291 kubelet[1367]: E0621 18:38:24.483473    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:38:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:38:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:38:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:38:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:39:24 ha-406291 kubelet[1367]: E0621 18:39:24.484210    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:39:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:39:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:39:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:39:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:40:24 ha-406291 kubelet[1367]: E0621 18:40:24.483552    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:40:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:40:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:40:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:40:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-406291 -n ha-406291
helpers_test.go:261: (dbg) Run:  kubectl --context ha-406291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-p2c87
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/CopyFile]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-406291 describe pod busybox-fc5497c4f-p2c87
helpers_test.go:282: (dbg) kubectl --context ha-406291 describe pod busybox-fc5497c4f-p2c87:

-- stdout --
	Name:             busybox-fc5497c4f-p2c87
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q8tzk (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-q8tzk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  115s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  0s (x2 over 9s)     default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/CopyFile (2.32s)

TestMultiControlPlane/serial/StopSecondaryNode (3.46s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-406291 node stop m02 -v=7 --alsologtostderr: (1.274950278s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr: exit status 7 (410.70134ms)

-- stdout --
	ha-406291
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-406291-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-406291-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0621 18:41:21.182564   34931 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:41:21.182809   34931 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:41:21.182818   34931 out.go:304] Setting ErrFile to fd 2...
	I0621 18:41:21.182822   34931 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:41:21.182972   34931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:41:21.183168   34931 out.go:298] Setting JSON to false
	I0621 18:41:21.183190   34931 mustload.go:65] Loading cluster: ha-406291
	I0621 18:41:21.183238   34931 notify.go:220] Checking for updates...
	I0621 18:41:21.183731   34931 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:41:21.183751   34931 status.go:255] checking status of ha-406291 ...
	I0621 18:41:21.184208   34931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:21.184275   34931 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:21.203856   34931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43711
	I0621 18:41:21.204260   34931 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:21.204847   34931 main.go:141] libmachine: Using API Version  1
	I0621 18:41:21.204872   34931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:21.205340   34931 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:21.205587   34931 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:41:21.207437   34931 status.go:330] ha-406291 host status = "Running" (err=<nil>)
	I0621 18:41:21.207452   34931 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:41:21.207756   34931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:21.207795   34931 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:21.223327   34931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
	I0621 18:41:21.223726   34931 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:21.224163   34931 main.go:141] libmachine: Using API Version  1
	I0621 18:41:21.224181   34931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:21.224512   34931 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:21.224706   34931 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:41:21.227592   34931 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:41:21.228008   34931 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:41:21.228033   34931 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:41:21.228175   34931 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:41:21.228581   34931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:21.228615   34931 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:21.243197   34931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35131
	I0621 18:41:21.243548   34931 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:21.243979   34931 main.go:141] libmachine: Using API Version  1
	I0621 18:41:21.243996   34931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:21.244259   34931 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:21.244394   34931 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:41:21.244583   34931 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:41:21.244603   34931 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:41:21.247302   34931 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:41:21.247712   34931 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:41:21.247739   34931 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:41:21.247935   34931 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:41:21.248081   34931 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:41:21.248236   34931 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:41:21.248362   34931 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:41:21.330682   34931 ssh_runner.go:195] Run: systemctl --version
	I0621 18:41:21.336849   34931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:41:21.350410   34931 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:41:21.350442   34931 api_server.go:166] Checking apiserver status ...
	I0621 18:41:21.350471   34931 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0621 18:41:21.363156   34931 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup
	W0621 18:41:21.372984   34931 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:41:21.373031   34931 ssh_runner.go:195] Run: ls
	I0621 18:41:21.377321   34931 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0621 18:41:21.381210   34931 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0621 18:41:21.381233   34931 status.go:422] ha-406291 apiserver status = Running (err=<nil>)
	I0621 18:41:21.381245   34931 status.go:257] ha-406291 status: &{Name:ha-406291 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:41:21.381259   34931 status.go:255] checking status of ha-406291-m02 ...
	I0621 18:41:21.381532   34931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:21.381567   34931 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:21.396443   34931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34655
	I0621 18:41:21.396850   34931 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:21.397313   34931 main.go:141] libmachine: Using API Version  1
	I0621 18:41:21.397341   34931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:21.397611   34931 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:21.397776   34931 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:41:21.399259   34931 status.go:330] ha-406291-m02 host status = "Stopped" (err=<nil>)
	I0621 18:41:21.399276   34931 status.go:343] host is not running, skipping remaining checks
	I0621 18:41:21.399284   34931 status.go:257] ha-406291-m02 status: &{Name:ha-406291-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:41:21.399314   34931 status.go:255] checking status of ha-406291-m03 ...
	I0621 18:41:21.399572   34931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:21.399601   34931 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:21.413984   34931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40797
	I0621 18:41:21.414438   34931 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:21.415030   34931 main.go:141] libmachine: Using API Version  1
	I0621 18:41:21.415056   34931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:21.415358   34931 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:21.415546   34931 main.go:141] libmachine: (ha-406291-m03) Calling .GetState
	I0621 18:41:21.416873   34931 status.go:330] ha-406291-m03 host status = "Running" (err=<nil>)
	I0621 18:41:21.416889   34931 host.go:66] Checking if "ha-406291-m03" exists ...
	I0621 18:41:21.417194   34931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:21.417227   34931 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:21.431672   34931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34957
	I0621 18:41:21.432081   34931 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:21.432531   34931 main.go:141] libmachine: Using API Version  1
	I0621 18:41:21.432558   34931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:21.432867   34931 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:21.433039   34931 main.go:141] libmachine: (ha-406291-m03) Calling .GetIP
	I0621 18:41:21.435952   34931 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:41:21.436346   34931 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:41:21.436371   34931 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:41:21.436455   34931 host.go:66] Checking if "ha-406291-m03" exists ...
	I0621 18:41:21.436848   34931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:21.436898   34931 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:21.451267   34931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45831
	I0621 18:41:21.451640   34931 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:21.452107   34931 main.go:141] libmachine: Using API Version  1
	I0621 18:41:21.452129   34931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:21.452413   34931 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:21.452615   34931 main.go:141] libmachine: (ha-406291-m03) Calling .DriverName
	I0621 18:41:21.452784   34931 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:41:21.452805   34931 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHHostname
	I0621 18:41:21.455278   34931 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:41:21.455619   34931 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:41:21.455647   34931 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:41:21.455821   34931 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHPort
	I0621 18:41:21.455993   34931 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHKeyPath
	I0621 18:41:21.456150   34931 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHUsername
	I0621 18:41:21.456285   34931 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m03/id_rsa Username:docker}
	I0621 18:41:21.540730   34931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:41:21.553859   34931 status.go:257] ha-406291-m03 status: &{Name:ha-406291-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr": ha-406291
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-406291-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-406291-m03
type: Worker
host: Running
kubelet: Running

ha_test.go:378: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr": ha-406291
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-406291-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-406291-m03
type: Worker
host: Running
kubelet: Running

ha_test.go:381: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr": ha-406291
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-406291-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-406291-m03
type: Worker
host: Running
kubelet: Running

ha_test.go:384: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr": ha-406291
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-406291-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-406291-m03
type: Worker
host: Running
kubelet: Running

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-406291 -n ha-406291
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-406291 logs -n 25: (1.0796229s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |           |         |         |                     |                     |
	| node    | add -p ha-406291 -v=7                | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:41 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-406291 node stop m02 -v=7         | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:41 UTC | 21 Jun 24 18:41 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/21 18:26:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0621 18:26:42.447747   30068 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:26:42.447858   30068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:26:42.447867   30068 out.go:304] Setting ErrFile to fd 2...
	I0621 18:26:42.447871   30068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:26:42.448064   30068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:26:42.448611   30068 out.go:298] Setting JSON to false
	I0621 18:26:42.449397   30068 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4100,"bootTime":1718990302,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 18:26:42.449454   30068 start.go:139] virtualization: kvm guest
	I0621 18:26:42.451750   30068 out.go:177] * [ha-406291] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 18:26:42.453097   30068 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 18:26:42.453116   30068 notify.go:220] Checking for updates...
	I0621 18:26:42.456195   30068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 18:26:42.457398   30068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:26:42.458579   30068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.459798   30068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 18:26:42.461088   30068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 18:26:42.462525   30068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 18:26:42.497263   30068 out.go:177] * Using the kvm2 driver based on user configuration
	I0621 18:26:42.498734   30068 start.go:297] selected driver: kvm2
	I0621 18:26:42.498753   30068 start.go:901] validating driver "kvm2" against <nil>
	I0621 18:26:42.498763   30068 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 18:26:42.499421   30068 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:26:42.499483   30068 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 18:26:42.513772   30068 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 18:26:42.513840   30068 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0621 18:26:42.514036   30068 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0621 18:26:42.514063   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:26:42.514070   30068 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0621 18:26:42.514080   30068 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0621 18:26:42.514119   30068 start.go:340] cluster config:
	{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:26:42.514203   30068 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:26:42.515839   30068 out.go:177] * Starting "ha-406291" primary control-plane node in "ha-406291" cluster
	I0621 18:26:42.516925   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:26:42.516952   30068 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 18:26:42.516960   30068 cache.go:56] Caching tarball of preloaded images
	I0621 18:26:42.517025   30068 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:26:42.517035   30068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:26:42.517302   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:26:42.517325   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json: {Name:mkd43eceea282503c79b6e4b90bbf7258fcf8b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
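The cluster config dumped above is what gets persisted to the profile's config.json. As a rough illustration (the struct below is a hand-picked subset of the fields visible in the dump, not minikube's own types, and the path is the one from this run), reading a few of those fields back with encoding/json looks like this:

// Minimal sketch, not minikube's own code: read a few fields back out of the
// profile config.json that the log above reports saving. Field names follow the
// keys visible in the cluster config dump; the struct is a hand-picked subset.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type kubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
}

type clusterConfig struct {
	Name             string
	Driver           string
	Memory           int
	CPUs             int
	KubernetesConfig kubernetesConfig
}

func main() {
	// Illustrative path; substitute the profile directory from the log.
	data, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/profiles/ha-406291/config.json"))
	if err != nil {
		panic(err)
	}
	var cc clusterConfig
	if err := json.Unmarshal(data, &cc); err != nil {
		panic(err)
	}
	fmt.Printf("%s: driver=%s, k8s=%s, runtime=%s\n",
		cc.Name, cc.Driver, cc.KubernetesConfig.KubernetesVersion, cc.KubernetesConfig.ContainerRuntime)
}
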
	I0621 18:26:42.517445   30068 start.go:360] acquireMachinesLock for ha-406291: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:26:42.517470   30068 start.go:364] duration metric: took 13.314µs to acquireMachinesLock for "ha-406291"
	I0621 18:26:42.517485   30068 start.go:93] Provisioning new machine with config: &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:26:42.517531   30068 start.go:125] createHost starting for "" (driver="kvm2")
	I0621 18:26:42.518937   30068 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0621 18:26:42.519071   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:26:42.519109   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:26:42.533235   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0621 18:26:42.533669   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:26:42.534312   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:26:42.534360   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:26:42.534665   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:26:42.534880   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:26:42.535018   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:26:42.535180   30068 start.go:159] libmachine.API.Create for "ha-406291" (driver="kvm2")
	I0621 18:26:42.535209   30068 client.go:168] LocalClient.Create starting
	I0621 18:26:42.535233   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem
	I0621 18:26:42.535267   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:26:42.535282   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:26:42.535339   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem
	I0621 18:26:42.535357   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:26:42.535367   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:26:42.535383   30068 main.go:141] libmachine: Running pre-create checks...
	I0621 18:26:42.535396   30068 main.go:141] libmachine: (ha-406291) Calling .PreCreateCheck
	I0621 18:26:42.535734   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:26:42.536101   30068 main.go:141] libmachine: Creating machine...
	I0621 18:26:42.536113   30068 main.go:141] libmachine: (ha-406291) Calling .Create
	I0621 18:26:42.536232   30068 main.go:141] libmachine: (ha-406291) Creating KVM machine...
	I0621 18:26:42.537484   30068 main.go:141] libmachine: (ha-406291) DBG | found existing default KVM network
	I0621 18:26:42.538310   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.538153   30091 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0621 18:26:42.538339   30068 main.go:141] libmachine: (ha-406291) DBG | created network xml: 
	I0621 18:26:42.538346   30068 main.go:141] libmachine: (ha-406291) DBG | <network>
	I0621 18:26:42.538355   30068 main.go:141] libmachine: (ha-406291) DBG |   <name>mk-ha-406291</name>
	I0621 18:26:42.538371   30068 main.go:141] libmachine: (ha-406291) DBG |   <dns enable='no'/>
	I0621 18:26:42.538385   30068 main.go:141] libmachine: (ha-406291) DBG |   
	I0621 18:26:42.538392   30068 main.go:141] libmachine: (ha-406291) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0621 18:26:42.538400   30068 main.go:141] libmachine: (ha-406291) DBG |     <dhcp>
	I0621 18:26:42.538412   30068 main.go:141] libmachine: (ha-406291) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0621 18:26:42.538421   30068 main.go:141] libmachine: (ha-406291) DBG |     </dhcp>
	I0621 18:26:42.538439   30068 main.go:141] libmachine: (ha-406291) DBG |   </ip>
	I0621 18:26:42.538451   30068 main.go:141] libmachine: (ha-406291) DBG |   
	I0621 18:26:42.538458   30068 main.go:141] libmachine: (ha-406291) DBG | </network>
	I0621 18:26:42.538470   30068 main.go:141] libmachine: (ha-406291) DBG | 
	I0621 18:26:42.543401   30068 main.go:141] libmachine: (ha-406291) DBG | trying to create private KVM network mk-ha-406291 192.168.39.0/24...
	I0621 18:26:42.606041   30068 main.go:141] libmachine: (ha-406291) DBG | private KVM network mk-ha-406291 192.168.39.0/24 created
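The network XML above is handed to libvirt to create the private mk-ha-406291 network. The kvm2 driver talks to libvirt programmatically; purely as an illustration under that caveat, the same network could be defined and started with the virsh CLI from Go:

// Illustrative only: define and start a libvirt network equivalent to the XML
// printed above using the virsh CLI. The real kvm2 driver uses libvirt's API
// directly; the network name and CIDR below are taken from the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

const networkXML = `<network>
  <name>mk-ha-406291</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		panic(err)
	}
	f.Close()

	// net-define registers the network; net-start brings it up (virbr1 in the log).
	if err := run("virsh", "--connect", "qemu:///system", "net-define", f.Name()); err != nil {
		panic(err)
	}
	if err := run("virsh", "--connect", "qemu:///system", "net-start", "mk-ha-406291"); err != nil {
		panic(err)
	}
	fmt.Println("network mk-ha-406291 defined and started")
}
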
	I0621 18:26:42.606072   30068 main.go:141] libmachine: (ha-406291) Setting up store path in /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 ...
	I0621 18:26:42.606091   30068 main.go:141] libmachine: (ha-406291) Building disk image from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 18:26:42.606165   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.606075   30091 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.606280   30068 main.go:141] libmachine: (ha-406291) Downloading /home/jenkins/minikube-integration/19112-8111/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0621 18:26:42.829374   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.829262   30091 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa...
	I0621 18:26:42.941790   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.941666   30091 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/ha-406291.rawdisk...
	I0621 18:26:42.941834   30068 main.go:141] libmachine: (ha-406291) DBG | Writing magic tar header
	I0621 18:26:42.941844   30068 main.go:141] libmachine: (ha-406291) DBG | Writing SSH key tar header
	I0621 18:26:42.941852   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.941778   30091 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 ...
	I0621 18:26:42.941909   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291
	I0621 18:26:42.941989   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 (perms=drwx------)
	I0621 18:26:42.942007   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines
	I0621 18:26:42.942019   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines (perms=drwxr-xr-x)
	I0621 18:26:42.942033   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube (perms=drwxr-xr-x)
	I0621 18:26:42.942053   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111 (perms=drwxrwxr-x)
	I0621 18:26:42.942060   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.942069   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111
	I0621 18:26:42.942075   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0621 18:26:42.942080   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0621 18:26:42.942088   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins
	I0621 18:26:42.942104   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0621 18:26:42.942117   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home
	I0621 18:26:42.942128   30068 main.go:141] libmachine: (ha-406291) DBG | Skipping /home - not owner
	I0621 18:26:42.942142   30068 main.go:141] libmachine: (ha-406291) Creating domain...
	I0621 18:26:42.943154   30068 main.go:141] libmachine: (ha-406291) define libvirt domain using xml: 
	I0621 18:26:42.943176   30068 main.go:141] libmachine: (ha-406291) <domain type='kvm'>
	I0621 18:26:42.943183   30068 main.go:141] libmachine: (ha-406291)   <name>ha-406291</name>
	I0621 18:26:42.943188   30068 main.go:141] libmachine: (ha-406291)   <memory unit='MiB'>2200</memory>
	I0621 18:26:42.943199   30068 main.go:141] libmachine: (ha-406291)   <vcpu>2</vcpu>
	I0621 18:26:42.943203   30068 main.go:141] libmachine: (ha-406291)   <features>
	I0621 18:26:42.943208   30068 main.go:141] libmachine: (ha-406291)     <acpi/>
	I0621 18:26:42.943212   30068 main.go:141] libmachine: (ha-406291)     <apic/>
	I0621 18:26:42.943217   30068 main.go:141] libmachine: (ha-406291)     <pae/>
	I0621 18:26:42.943223   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943229   30068 main.go:141] libmachine: (ha-406291)   </features>
	I0621 18:26:42.943234   30068 main.go:141] libmachine: (ha-406291)   <cpu mode='host-passthrough'>
	I0621 18:26:42.943255   30068 main.go:141] libmachine: (ha-406291)   
	I0621 18:26:42.943266   30068 main.go:141] libmachine: (ha-406291)   </cpu>
	I0621 18:26:42.943284   30068 main.go:141] libmachine: (ha-406291)   <os>
	I0621 18:26:42.943318   30068 main.go:141] libmachine: (ha-406291)     <type>hvm</type>
	I0621 18:26:42.943328   30068 main.go:141] libmachine: (ha-406291)     <boot dev='cdrom'/>
	I0621 18:26:42.943333   30068 main.go:141] libmachine: (ha-406291)     <boot dev='hd'/>
	I0621 18:26:42.943341   30068 main.go:141] libmachine: (ha-406291)     <bootmenu enable='no'/>
	I0621 18:26:42.943345   30068 main.go:141] libmachine: (ha-406291)   </os>
	I0621 18:26:42.943355   30068 main.go:141] libmachine: (ha-406291)   <devices>
	I0621 18:26:42.943360   30068 main.go:141] libmachine: (ha-406291)     <disk type='file' device='cdrom'>
	I0621 18:26:42.943371   30068 main.go:141] libmachine: (ha-406291)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/boot2docker.iso'/>
	I0621 18:26:42.943384   30068 main.go:141] libmachine: (ha-406291)       <target dev='hdc' bus='scsi'/>
	I0621 18:26:42.943397   30068 main.go:141] libmachine: (ha-406291)       <readonly/>
	I0621 18:26:42.943404   30068 main.go:141] libmachine: (ha-406291)     </disk>
	I0621 18:26:42.943417   30068 main.go:141] libmachine: (ha-406291)     <disk type='file' device='disk'>
	I0621 18:26:42.943429   30068 main.go:141] libmachine: (ha-406291)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0621 18:26:42.943445   30068 main.go:141] libmachine: (ha-406291)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/ha-406291.rawdisk'/>
	I0621 18:26:42.943456   30068 main.go:141] libmachine: (ha-406291)       <target dev='hda' bus='virtio'/>
	I0621 18:26:42.943478   30068 main.go:141] libmachine: (ha-406291)     </disk>
	I0621 18:26:42.943499   30068 main.go:141] libmachine: (ha-406291)     <interface type='network'>
	I0621 18:26:42.943509   30068 main.go:141] libmachine: (ha-406291)       <source network='mk-ha-406291'/>
	I0621 18:26:42.943513   30068 main.go:141] libmachine: (ha-406291)       <model type='virtio'/>
	I0621 18:26:42.943519   30068 main.go:141] libmachine: (ha-406291)     </interface>
	I0621 18:26:42.943526   30068 main.go:141] libmachine: (ha-406291)     <interface type='network'>
	I0621 18:26:42.943532   30068 main.go:141] libmachine: (ha-406291)       <source network='default'/>
	I0621 18:26:42.943539   30068 main.go:141] libmachine: (ha-406291)       <model type='virtio'/>
	I0621 18:26:42.943544   30068 main.go:141] libmachine: (ha-406291)     </interface>
	I0621 18:26:42.943549   30068 main.go:141] libmachine: (ha-406291)     <serial type='pty'>
	I0621 18:26:42.943554   30068 main.go:141] libmachine: (ha-406291)       <target port='0'/>
	I0621 18:26:42.943560   30068 main.go:141] libmachine: (ha-406291)     </serial>
	I0621 18:26:42.943565   30068 main.go:141] libmachine: (ha-406291)     <console type='pty'>
	I0621 18:26:42.943571   30068 main.go:141] libmachine: (ha-406291)       <target type='serial' port='0'/>
	I0621 18:26:42.943583   30068 main.go:141] libmachine: (ha-406291)     </console>
	I0621 18:26:42.943593   30068 main.go:141] libmachine: (ha-406291)     <rng model='virtio'>
	I0621 18:26:42.943602   30068 main.go:141] libmachine: (ha-406291)       <backend model='random'>/dev/random</backend>
	I0621 18:26:42.943609   30068 main.go:141] libmachine: (ha-406291)     </rng>
	I0621 18:26:42.943617   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943621   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943627   30068 main.go:141] libmachine: (ha-406291)   </devices>
	I0621 18:26:42.943631   30068 main.go:141] libmachine: (ha-406291) </domain>
	I0621 18:26:42.943638   30068 main.go:141] libmachine: (ha-406291) 
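The domain XML is generated from a template filled with the name, memory, vCPU count, ISO, raw disk and network names seen above. A sketch of that idea with text/template follows; the template is abbreviated and the paths are placeholders, so it is not the driver's actual template:

// Sketch: render an abbreviated version of the domain XML above from a few
// parameters. Only the fields that vary in the log are kept (name, memory,
// vcpus, disks, network); device details such as serial/console/rng are omitted.
package main

import (
	"os"
	"text/template"
)

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISO}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.Disk}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domainParams struct {
	Name      string
	MemoryMiB int
	VCPUs     int
	ISO       string
	Disk      string
	Network   string
}

func main() {
	p := domainParams{
		Name:      "ha-406291",
		MemoryMiB: 2200,
		VCPUs:     2,
		ISO:       "/path/to/boot2docker.iso", // illustrative paths
		Disk:      "/path/to/ha-406291.rawdisk",
		Network:   "mk-ha-406291",
	}
	t := template.Must(template.New("domain").Parse(domainTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
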
	I0621 18:26:42.948298   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:44:10:c4 in network default
	I0621 18:26:42.948968   30068 main.go:141] libmachine: (ha-406291) Ensuring networks are active...
	I0621 18:26:42.948988   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:42.949710   30068 main.go:141] libmachine: (ha-406291) Ensuring network default is active
	I0621 18:26:42.950033   30068 main.go:141] libmachine: (ha-406291) Ensuring network mk-ha-406291 is active
	I0621 18:26:42.950493   30068 main.go:141] libmachine: (ha-406291) Getting domain xml...
	I0621 18:26:42.951151   30068 main.go:141] libmachine: (ha-406291) Creating domain...
	I0621 18:26:44.128421   30068 main.go:141] libmachine: (ha-406291) Waiting to get IP...
	I0621 18:26:44.129183   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.129530   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.129550   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.129513   30091 retry.go:31] will retry after 273.280189ms: waiting for machine to come up
	I0621 18:26:44.404590   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.405440   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.405467   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.405386   30091 retry.go:31] will retry after 363.287979ms: waiting for machine to come up
	I0621 18:26:44.769749   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.770188   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.770217   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.770146   30091 retry.go:31] will retry after 445.9009ms: waiting for machine to come up
	I0621 18:26:45.217708   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:45.218113   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:45.218132   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:45.218075   30091 retry.go:31] will retry after 497.769852ms: waiting for machine to come up
	I0621 18:26:45.717913   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:45.718380   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:45.718402   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:45.718333   30091 retry.go:31] will retry after 609.412902ms: waiting for machine to come up
	I0621 18:26:46.329589   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:46.330043   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:46.330077   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:46.330033   30091 retry.go:31] will retry after 668.226784ms: waiting for machine to come up
	I0621 18:26:46.999851   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:47.000352   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:47.000399   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:47.000310   30091 retry.go:31] will retry after 928.90777ms: waiting for machine to come up
	I0621 18:26:47.931043   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:47.931568   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:47.931598   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:47.931527   30091 retry.go:31] will retry after 1.407643188s: waiting for machine to come up
	I0621 18:26:49.341126   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:49.341529   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:49.341557   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:49.341489   30091 retry.go:31] will retry after 1.657120945s: waiting for machine to come up
	I0621 18:26:51.001518   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:51.001999   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:51.002022   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:51.001955   30091 retry.go:31] will retry after 1.506025988s: waiting for machine to come up
	I0621 18:26:52.509823   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:52.510314   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:52.510342   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:52.510269   30091 retry.go:31] will retry after 2.859818514s: waiting for machine to come up
	I0621 18:26:55.371181   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:55.371726   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:55.371755   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:55.371678   30091 retry.go:31] will retry after 3.374080501s: waiting for machine to come up
	I0621 18:26:58.747494   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:58.748019   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:58.748039   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:58.747991   30091 retry.go:31] will retry after 4.386740875s: waiting for machine to come up
	I0621 18:27:03.136546   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.137046   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has current primary IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.137063   30068 main.go:141] libmachine: (ha-406291) Found IP for machine: 192.168.39.198
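The "will retry after ..." lines above come from polling the DHCP leases with growing, jittered delays until the domain reports an address. A sketch of that pattern; lookupIP is a stand-in for the real lease query, and the delays only approximate the ones logged:

// Sketch of the wait-for-IP loop visible above: poll a lookup function with
// increasing, jittered delays until it succeeds or an overall deadline passes.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP() (string, error) {
	// Placeholder: in the driver this asks libvirt's DHCP leases for the VM's MAC.
	return "", errors.New("no lease yet")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter, roughly matching the spacing in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("attempt %d: will retry after %s\n", attempt, sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay += delay / 2
		}
	}
	return "", fmt.Errorf("timed out after %s waiting for an IP", timeout)
}

func main() {
	ip, err := waitForIP(3 * time.Second) // short timeout so the example terminates
	fmt.Println(ip, err)
}
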
	I0621 18:27:03.137079   30068 main.go:141] libmachine: (ha-406291) Reserving static IP address...
	I0621 18:27:03.137427   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find host DHCP lease matching {name: "ha-406291", mac: "52:54:00:38:dc:46", ip: "192.168.39.198"} in network mk-ha-406291
	I0621 18:27:03.211473   30068 main.go:141] libmachine: (ha-406291) DBG | Getting to WaitForSSH function...
	I0621 18:27:03.211506   30068 main.go:141] libmachine: (ha-406291) Reserved static IP address: 192.168.39.198
	I0621 18:27:03.211519   30068 main.go:141] libmachine: (ha-406291) Waiting for SSH to be available...
	I0621 18:27:03.214029   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.214477   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291
	I0621 18:27:03.214509   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find defined IP address of network mk-ha-406291 interface with MAC address 52:54:00:38:dc:46
	I0621 18:27:03.214661   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH client type: external
	I0621 18:27:03.214702   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa (-rw-------)
	I0621 18:27:03.214745   30068 main.go:141] libmachine: (ha-406291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:03.214771   30068 main.go:141] libmachine: (ha-406291) DBG | About to run SSH command:
	I0621 18:27:03.214784   30068 main.go:141] libmachine: (ha-406291) DBG | exit 0
	I0621 18:27:03.218578   30068 main.go:141] libmachine: (ha-406291) DBG | SSH cmd err, output: exit status 255: 
	I0621 18:27:03.218603   30068 main.go:141] libmachine: (ha-406291) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0621 18:27:03.218614   30068 main.go:141] libmachine: (ha-406291) DBG | command : exit 0
	I0621 18:27:03.218630   30068 main.go:141] libmachine: (ha-406291) DBG | err     : exit status 255
	I0621 18:27:03.218643   30068 main.go:141] libmachine: (ha-406291) DBG | output  : 
	I0621 18:27:06.220803   30068 main.go:141] libmachine: (ha-406291) DBG | Getting to WaitForSSH function...
	I0621 18:27:06.223287   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.223552   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.223591   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.223725   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH client type: external
	I0621 18:27:06.223751   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa (-rw-------)
	I0621 18:27:06.223775   30068 main.go:141] libmachine: (ha-406291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:06.223788   30068 main.go:141] libmachine: (ha-406291) DBG | About to run SSH command:
	I0621 18:27:06.223797   30068 main.go:141] libmachine: (ha-406291) DBG | exit 0
	I0621 18:27:06.345962   30068 main.go:141] libmachine: (ha-406291) DBG | SSH cmd err, output: <nil>: 
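WaitForSSH shells out to /usr/bin/ssh with the options listed above and keeps running `exit 0` until the guest's sshd answers (the first attempt fails with status 255 while the VM is still booting). A sketch with os/exec using the same flag list; the address and key path are placeholders:

// Sketch of the external-SSH probe above: run `ssh ... docker@addr exit 0`
// until it returns 0, which signals that sshd inside the VM is reachable.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshExitZero(addr, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + addr,
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		if err := sshExitZero("192.168.39.198", "/path/to/id_rsa"); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second) // the log shows roughly 3s between probes
	}
	fmt.Println("gave up waiting for SSH")
}
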
	I0621 18:27:06.346198   30068 main.go:141] libmachine: (ha-406291) KVM machine creation complete!
	I0621 18:27:06.346530   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:27:06.347151   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:06.347376   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:06.347539   30068 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0621 18:27:06.347553   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:06.349257   30068 main.go:141] libmachine: Detecting operating system of created instance...
	I0621 18:27:06.349272   30068 main.go:141] libmachine: Waiting for SSH to be available...
	I0621 18:27:06.349278   30068 main.go:141] libmachine: Getting to WaitForSSH function...
	I0621 18:27:06.349284   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.351365   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.351709   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.351738   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.351848   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.352053   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.352215   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.352441   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.352676   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.352926   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.352939   30068 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0621 18:27:06.449038   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:06.449066   30068 main.go:141] libmachine: Detecting the provisioner...
	I0621 18:27:06.449077   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.451811   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.452202   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.452223   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.452405   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.452602   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.452762   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.452898   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.453074   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.453321   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.453334   30068 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0621 18:27:06.550539   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0621 18:27:06.550611   30068 main.go:141] libmachine: found compatible host: buildroot
	I0621 18:27:06.550618   30068 main.go:141] libmachine: Provisioning with buildroot...
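Provisioner detection is just `cat /etc/os-release` plus a match on the ID field, which is `buildroot` for the minikube guest image. A small sketch that parses the output shown above:

// Sketch: parse /etc/os-release output like the block above and pick a
// provisioner based on the ID field.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const osRelease = `NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"`

func parseOSRelease(s string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(s))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		kv := strings.SplitN(line, "=", 2)
		out[kv[0]] = strings.Trim(kv[1], `"`)
	}
	return out
}

func main() {
	fields := parseOSRelease(osRelease)
	if fields["ID"] == "buildroot" {
		fmt.Println("found compatible host: buildroot")
	} else {
		fmt.Printf("unexpected host OS: %s\n", fields["PRETTY_NAME"])
	}
}
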
	I0621 18:27:06.550625   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.550871   30068 buildroot.go:166] provisioning hostname "ha-406291"
	I0621 18:27:06.550891   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.551068   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.553701   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.554112   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.554138   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.554279   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.554452   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.554601   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.554725   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.554869   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.555029   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.555040   30068 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291 && echo "ha-406291" | sudo tee /etc/hostname
	I0621 18:27:06.664012   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291
	
	I0621 18:27:06.664038   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.666600   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.666923   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.666952   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.667091   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.667277   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.667431   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.667559   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.667745   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.667932   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.667949   30068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:27:06.778156   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
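The hostname step runs the two shell fragments above over SSH: set the hostname, then rewrite or append the 127.0.1.1 entry in /etc/hosts. A sketch that assembles those commands for an arbitrary node name; the shell text mirrors the log, only the name is substituted:

// Sketch: build the hostname-provisioning commands shown above for a given node.
package main

import "fmt"

func hostnameCommands(name string) []string {
	return []string{
		fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name),
		fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, name),
	}
}

func main() {
	for _, c := range hostnameCommands("ha-406291") {
		fmt.Println(c)
	}
}
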
	I0621 18:27:06.778199   30068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:27:06.778224   30068 buildroot.go:174] setting up certificates
	I0621 18:27:06.778237   30068 provision.go:84] configureAuth start
	I0621 18:27:06.778250   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.778526   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:06.781267   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.781583   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.781610   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.781773   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.784225   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.784546   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.784564   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.784717   30068 provision.go:143] copyHostCerts
	I0621 18:27:06.784747   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:06.784796   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:27:06.784813   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:06.784893   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:27:06.784992   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:06.785017   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:27:06.785023   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:06.785064   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:27:06.785126   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:06.785153   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:27:06.785162   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:06.785194   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:27:06.785257   30068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291 san=[127.0.0.1 192.168.39.198 ha-406291 localhost minikube]
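The server certificate is signed by the profile CA for exactly the SANs listed above (127.0.0.1, the VM IP, the hostname, localhost, minikube). A self-contained sketch with crypto/x509; it generates a throwaway CA in memory instead of loading ca.pem/ca-key.pem, so it is illustrative only:

// Sketch of the "generating server cert" step: sign a server certificate for the
// SANs from the log. Errors are ignored for brevity; a real implementation checks them.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Placeholder CA: a real run loads ca.pem / ca-key.pem from the certs dir.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-406291"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-406291", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.198")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
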
	I0621 18:27:06.904910   30068 provision.go:177] copyRemoteCerts
	I0621 18:27:06.904976   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:27:06.905004   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.907600   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.907883   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.907916   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.908115   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.908308   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.908462   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.908599   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:06.987463   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:27:06.987540   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:27:07.009572   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:27:07.009661   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0621 18:27:07.031219   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:27:07.031333   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 18:27:07.052682   30068 provision.go:87] duration metric: took 274.433059ms to configureAuth
	I0621 18:27:07.052709   30068 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:27:07.052895   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:07.052984   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.055368   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.055720   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.055742   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.055971   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.056161   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.056324   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.056453   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.056615   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:07.056785   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:07.056814   30068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:27:07.307055   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
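The `%!s(MISSING)` in the command above is Go's fmt notation for a %s verb with no matching argument: the remote command legitimately contains a literal %s for printf, and it gets rendered that way when the command string is echoed back through a Printf-style logging call. A two-line demonstration:

// Demonstration of why a literal %s inside a logged command shows up as
// %!s(MISSING): the string is passed as a format string with no arguments.
package main

import "fmt"

func main() {
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube`
	fmt.Printf(cmd + "\n") // the %s inside cmd has no argument -> printed as %!s(MISSING)
}
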
	
	I0621 18:27:07.307083   30068 main.go:141] libmachine: Checking connection to Docker...
	I0621 18:27:07.307105   30068 main.go:141] libmachine: (ha-406291) Calling .GetURL
	I0621 18:27:07.308373   30068 main.go:141] libmachine: (ha-406291) DBG | Using libvirt version 6000000
	I0621 18:27:07.310322   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.310631   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.310658   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.310756   30068 main.go:141] libmachine: Docker is up and running!
	I0621 18:27:07.310768   30068 main.go:141] libmachine: Reticulating splines...
	I0621 18:27:07.310774   30068 client.go:171] duration metric: took 24.775558818s to LocalClient.Create
	I0621 18:27:07.310795   30068 start.go:167] duration metric: took 24.775614868s to libmachine.API.Create "ha-406291"
	I0621 18:27:07.310807   30068 start.go:293] postStartSetup for "ha-406291" (driver="kvm2")
	I0621 18:27:07.310818   30068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:27:07.310835   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.311186   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:27:07.311208   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.313308   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.313543   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.313581   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.313682   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.313855   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.314042   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.314209   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.391859   30068 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:27:07.396062   30068 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:27:07.396083   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:27:07.396132   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:27:07.396193   30068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:27:07.396202   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:27:07.396289   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:27:07.405435   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:07.427927   30068 start.go:296] duration metric: took 117.075834ms for postStartSetup
	I0621 18:27:07.427984   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:27:07.428562   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:07.431157   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.431479   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.431523   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.431791   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:07.431969   30068 start.go:128] duration metric: took 24.914429669s to createHost
	I0621 18:27:07.431990   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.434121   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.434421   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.434445   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.434510   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.434692   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.434865   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.435009   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.435168   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:07.435372   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:07.435384   30068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0621 18:27:07.530141   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718994427.508226463
	
	I0621 18:27:07.530165   30068 fix.go:216] guest clock: 1718994427.508226463
	I0621 18:27:07.530173   30068 fix.go:229] Guest: 2024-06-21 18:27:07.508226463 +0000 UTC Remote: 2024-06-21 18:27:07.431981059 +0000 UTC m=+25.016949864 (delta=76.245404ms)
	I0621 18:27:07.530199   30068 fix.go:200] guest clock delta is within tolerance: 76.245404ms
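fix.go compares the guest's `date +%s.%N` output against the host clock and only resyncs when the delta exceeds a tolerance (76ms is well within it here). A sketch of that check; the 2-second tolerance below is an assumption for the example, not minikube's actual threshold:

// Sketch of the guest-clock check above: parse the guest's seconds.nanoseconds
// timestamp, compare against the host clock, and report whether it is in tolerance.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1718994427.508226463") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	const tolerance = 2 * time.Second // assumed tolerance for this sketch
	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
		fmt.Printf("guest clock delta %s is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %s exceeds tolerance; would sync the clock\n", delta)
	}
}
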
	I0621 18:27:07.530204   30068 start.go:83] releasing machines lock for "ha-406291", held for 25.012726918s
	I0621 18:27:07.530222   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.530466   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:07.532753   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.533110   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.533151   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.533275   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533702   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533877   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533978   30068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:27:07.534028   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.534087   30068 ssh_runner.go:195] Run: cat /version.json
	I0621 18:27:07.534115   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.536489   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536798   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.536828   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536845   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536983   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.537154   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.537312   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.537330   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.537337   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.537509   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.537507   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.537675   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.537830   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.537968   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.610886   30068 ssh_runner.go:195] Run: systemctl --version
	I0621 18:27:07.648150   30068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:27:07.798080   30068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:27:07.803683   30068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:27:07.803731   30068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:27:07.820345   30068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0621 18:27:07.820363   30068 start.go:494] detecting cgroup driver to use...
	I0621 18:27:07.820412   30068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:27:07.835960   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:27:07.849269   30068 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:27:07.849324   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:27:07.861858   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:27:07.874371   30068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:27:07.984965   30068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:27:08.126897   30068 docker.go:233] disabling docker service ...
	I0621 18:27:08.126973   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:27:08.140294   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:27:08.152460   30068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:27:08.289101   30068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:27:08.414578   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 18:27:08.428193   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:27:08.445335   30068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:27:08.445406   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.454715   30068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:27:08.454780   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.464286   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.473688   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.483215   30068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:27:08.492907   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.502386   30068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.518138   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
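The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.9 pause image, the cgroupfs cgroup manager, a "pod" conmon cgroup, and an unprivileged-port sysctl. A minimal Go sketch, not minikube's own code, of the same style of key rewrite against that file:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setTOMLKey replaces any existing line assigning key with `key = "value"`.
func setTOMLKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(fmt.Sprintf("%s = %q", key, value)))
}

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log above
	conf, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	conf = setTOMLKey(conf, "cgroup_manager", "cgroupfs")
	conf = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}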
	I0621 18:27:08.527822   30068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:27:08.536491   30068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 18:27:08.536537   30068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 18:27:08.548343   30068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 18:27:08.557395   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:27:08.668782   30068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 18:27:08.793146   30068 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:27:08.793228   30068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:27:08.797886   30068 start.go:562] Will wait 60s for crictl version
	I0621 18:27:08.797933   30068 ssh_runner.go:195] Run: which crictl
	I0621 18:27:08.801183   30068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:27:08.838953   30068 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:27:08.839028   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:27:08.865047   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:27:08.892059   30068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:27:08.893365   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:08.895801   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:08.896174   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:08.896198   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:08.896377   30068 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:27:08.900124   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
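The two commands above pin host.minikube.internal to the libvirt gateway 192.168.39.1 in the guest's /etc/hosts, first checking whether the entry exists and otherwise dropping any stale line before appending a fresh one. A minimal sketch of the equivalent edit, assuming it runs with enough privilege to rewrite /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any previous mapping for host and appends ip<TAB>host.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}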
	I0621 18:27:08.912152   30068 kubeadm.go:877] updating cluster {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0621 18:27:08.912252   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:27:08.912299   30068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:27:08.941267   30068 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0621 18:27:08.941328   30068 ssh_runner.go:195] Run: which lz4
	I0621 18:27:08.944757   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0621 18:27:08.944843   30068 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0621 18:27:08.948482   30068 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0621 18:27:08.948507   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0621 18:27:10.186487   30068 crio.go:462] duration metric: took 1.241671996s to copy over tarball
	I0621 18:27:10.186568   30068 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0621 18:27:12.219224   30068 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.032622286s)
	I0621 18:27:12.219256   30068 crio.go:469] duration metric: took 2.032747658s to extract the tarball
	I0621 18:27:12.219265   30068 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0621 18:27:12.255526   30068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:27:12.297692   30068 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 18:27:12.297715   30068 cache_images.go:84] Images are preloaded, skipping loading
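crio.go first decides the preload tarball is needed because kube-apiserver:v1.30.2 is missing from the runtime, copies and extracts it, then re-runs crictl images and confirms everything is present. A minimal sketch, not minikube's code, of that image check against crictl's JSON output (assumes sudo and crictl are available on the node):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// imagesPreloaded reports whether any image tag contains the wanted reference.
func imagesPreloaded(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := imagesPreloaded("registry.k8s.io/kube-apiserver:v1.30.2")
	fmt.Println(ok, err)
}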
	I0621 18:27:12.297725   30068 kubeadm.go:928] updating node { 192.168.39.198 8443 v1.30.2 crio true true} ...
	I0621 18:27:12.297863   30068 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
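kubeadm.go renders the kubelet systemd drop-in shown above from the cluster config (binary path, node name, node IP). A minimal sketch of producing the same drop-in with text/template; the template layout here is an assumption for illustration, not minikube's exact template:

package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values below are the ones logged for this node.
	_ = t.Execute(os.Stdout, map[string]string{
		"KubeletPath": "/var/lib/minikube/binaries/v1.30.2/kubelet",
		"NodeName":    "ha-406291",
		"NodeIP":      "192.168.39.198",
	})
}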
	I0621 18:27:12.297956   30068 ssh_runner.go:195] Run: crio config
	I0621 18:27:12.347243   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:27:12.347276   30068 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0621 18:27:12.347288   30068 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0621 18:27:12.347314   30068 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.198 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-406291 NodeName:ha-406291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0621 18:27:12.347487   30068 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-406291"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0621 18:27:12.347514   30068 kube-vip.go:115] generating kube-vip config ...
	I0621 18:27:12.347563   30068 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:27:12.362180   30068 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:27:12.362273   30068 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
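kube-vip.go only injects the lb_enable/lb_port settings seen in the manifest above after the IPVS kernel modules load successfully (the modprobe step a few lines earlier), which is what the "auto-enabling control-plane load-balancing" message records. A minimal sketch of that gating decision; the helper name is invented for illustration:

package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable tries to load the IPVS modules used by kube-vip's load balancer.
func ipvsAvailable() bool {
	mods := []string{"ip_vs", "ip_vs_rr", "ip_vs_wrr", "ip_vs_sh", "nf_conntrack"}
	args := append([]string{"modprobe", "--all"}, mods...)
	return exec.Command("sudo", args...).Run() == nil
}

func main() {
	if ipvsAvailable() {
		fmt.Println("auto-enabling control-plane load-balancing (lb_enable=true, lb_port=8443)")
	} else {
		fmt.Println("IPVS modules unavailable; leaving kube-vip load-balancing off")
	}
}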
	I0621 18:27:12.362316   30068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:27:12.371448   30068 binaries.go:44] Found k8s binaries, skipping transfer
	I0621 18:27:12.371499   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0621 18:27:12.380031   30068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0621 18:27:12.395354   30068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0621 18:27:12.410533   30068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0621 18:27:12.425474   30068 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0621 18:27:12.440059   30068 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0621 18:27:12.443523   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:27:12.454828   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:27:12.572486   30068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0621 18:27:12.589057   30068 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.198
	I0621 18:27:12.589078   30068 certs.go:194] generating shared ca certs ...
	I0621 18:27:12.589095   30068 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.589221   30068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:27:12.589272   30068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:27:12.589282   30068 certs.go:256] generating profile certs ...
	I0621 18:27:12.589333   30068 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:27:12.589346   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt with IP's: []
	I0621 18:27:12.759863   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt ...
	I0621 18:27:12.759890   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt: {Name:mk1350197087e6f37ca28e80a43c199beace4f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.760090   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key ...
	I0621 18:27:12.760104   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key: {Name:mk90994b992a268304b337419707e3332d3f039a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.760206   30068 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92
	I0621 18:27:12.760222   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.254]
	I0621 18:27:13.132336   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 ...
	I0621 18:27:13.132362   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92: {Name:mke7daa70ff2d7bf8fa87eea51b1ed6731c0dd6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.132530   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92 ...
	I0621 18:27:13.132546   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92: {Name:mk310235904dba1c4db66ef73b8dcc06ff030051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.132647   30068 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:27:13.132737   30068 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
	I0621 18:27:13.132790   30068 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:27:13.132806   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt with IP's: []
	I0621 18:27:13.317891   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt ...
	I0621 18:27:13.317927   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt: {Name:mk5e450ef3633fa54e81eaeb94f9408c94729912 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.318119   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key ...
	I0621 18:27:13.318132   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key: {Name:mk3a1443924b05c36251566d5313d0eeb467e0fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
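crypto.go generates the profile certificates above by signing fresh keys with the shared minikube CA; the apiserver certificate carries the service IP, loopback, node IP, and the HA VIP 192.168.39.254 as IP SANs. A minimal Go sketch of issuing such a CA-signed serving certificate; the file paths, PKCS#1 RSA key format, key size, and validity period are assumptions, and the sketch prints only the certificate (the generated private key is not persisted):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// mustDecode reads a PEM file and returns the DER bytes of its first block.
func mustDecode(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatalf("no PEM data in %s", path)
	}
	return block.Bytes
}

func main() {
	caCert, err := x509.ParseCertificate(mustDecode("ca.crt")) // hypothetical paths
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("ca.key"))
	if err != nil {
		log.Fatal(err)
	}
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs taken from the log: service IP, loopback, node IP, HA VIP.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.198"),
			net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}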
	I0621 18:27:13.318220   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:27:13.318241   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:27:13.318251   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:27:13.318264   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:27:13.318274   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:27:13.318290   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:27:13.318302   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:27:13.318314   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:27:13.318363   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:27:13.318396   30068 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:27:13.318406   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:27:13.318428   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:27:13.318449   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:27:13.318469   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:27:13.318506   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:13.318531   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.318544   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.318556   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.319121   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:27:13.345382   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:27:13.379289   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:27:13.406853   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:27:13.430624   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0621 18:27:13.452498   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0621 18:27:13.474381   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:27:13.497475   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:27:13.520548   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:27:13.543849   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:27:13.569722   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:27:13.594191   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0621 18:27:13.611312   30068 ssh_runner.go:195] Run: openssl version
	I0621 18:27:13.616881   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:27:13.627054   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.631162   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.631214   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.636845   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:27:13.648132   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:27:13.658846   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.663074   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.663140   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.668358   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 18:27:13.678369   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:27:13.688293   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.692517   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.692581   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.697837   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
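The openssl and ln steps above install each CA bundle under /usr/share/ca-certificates and link it into /etc/ssl/certs/<subject-hash>.0 so the system trust store picks it up. A minimal sketch of that step, shelling out to openssl for the hash; linking straight to the /usr/share path is a simplification of what the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// trustCert links a PEM certificate into /etc/ssl/certs under its subject hash.
func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}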
	I0621 18:27:13.707967   30068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:27:13.711761   30068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 18:27:13.711821   30068 kubeadm.go:391] StartCluster: {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:27:13.711887   30068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0621 18:27:13.711960   30068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0621 18:27:13.752929   30068 cri.go:89] found id: ""
	I0621 18:27:13.753017   30068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0621 18:27:13.762514   30068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0621 18:27:13.771612   30068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0621 18:27:13.781740   30068 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0621 18:27:13.781758   30068 kubeadm.go:156] found existing configuration files:
	
	I0621 18:27:13.781811   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0621 18:27:13.790876   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0621 18:27:13.790943   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0621 18:27:13.800011   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0621 18:27:13.809117   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0621 18:27:13.809168   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0621 18:27:13.818279   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0621 18:27:13.827522   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0621 18:27:13.827584   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0621 18:27:13.836671   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0621 18:27:13.845242   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0621 18:27:13.845298   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0621 18:27:13.854365   30068 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0621 18:27:13.951888   30068 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0621 18:27:13.951970   30068 kubeadm.go:309] [preflight] Running pre-flight checks
	I0621 18:27:14.081675   30068 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0621 18:27:14.081845   30068 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0621 18:27:14.081983   30068 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0621 18:27:14.292951   30068 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0621 18:27:14.423174   30068 out.go:204]   - Generating certificates and keys ...
	I0621 18:27:14.423287   30068 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0621 18:27:14.423355   30068 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0621 18:27:14.524306   30068 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0621 18:27:14.693249   30068 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0621 18:27:14.771462   30068 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0621 18:27:14.965492   30068 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0621 18:27:15.095342   30068 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0621 18:27:15.095646   30068 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-406291 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I0621 18:27:15.247328   30068 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0621 18:27:15.247729   30068 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-406291 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I0621 18:27:15.326656   30068 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0621 18:27:15.470979   30068 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0621 18:27:15.620090   30068 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0621 18:27:15.620402   30068 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0621 18:27:15.715693   30068 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0621 18:27:16.259484   30068 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0621 18:27:16.704626   30068 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0621 18:27:16.836633   30068 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0621 18:27:16.996818   30068 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0621 18:27:16.997517   30068 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0621 18:27:16.999949   30068 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0621 18:27:17.001874   30068 out.go:204]   - Booting up control plane ...
	I0621 18:27:17.001982   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0621 18:27:17.002874   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0621 18:27:17.003729   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0621 18:27:17.018894   30068 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0621 18:27:17.019816   30068 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0621 18:27:17.019944   30068 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0621 18:27:17.138099   30068 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0621 18:27:17.138195   30068 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0621 18:27:17.639115   30068 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.282189ms
	I0621 18:27:17.639214   30068 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0621 18:27:23.502026   30068 kubeadm.go:309] [api-check] The API server is healthy after 5.864418149s
	I0621 18:27:23.512938   30068 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0621 18:27:23.528670   30068 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0621 18:27:24.059886   30068 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0621 18:27:24.060060   30068 kubeadm.go:309] [mark-control-plane] Marking the node ha-406291 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0621 18:27:24.071607   30068 kubeadm.go:309] [bootstrap-token] Using token: ha2utu.p9k0bq1xsr5791t7
	I0621 18:27:24.073185   30068 out.go:204]   - Configuring RBAC rules ...
	I0621 18:27:24.073336   30068 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0621 18:27:24.084336   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0621 18:27:24.092265   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0621 18:27:24.096415   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0621 18:27:24.101175   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0621 18:27:24.104689   30068 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0621 18:27:24.121568   30068 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0621 18:27:24.349610   30068 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0621 18:27:24.907607   30068 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0621 18:27:24.908452   30068 kubeadm.go:309] 
	I0621 18:27:24.908529   30068 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0621 18:27:24.908541   30068 kubeadm.go:309] 
	I0621 18:27:24.908607   30068 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0621 18:27:24.908645   30068 kubeadm.go:309] 
	I0621 18:27:24.908698   30068 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0621 18:27:24.908780   30068 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0621 18:27:24.908863   30068 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0621 18:27:24.908873   30068 kubeadm.go:309] 
	I0621 18:27:24.908975   30068 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0621 18:27:24.908993   30068 kubeadm.go:309] 
	I0621 18:27:24.909038   30068 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0621 18:27:24.909045   30068 kubeadm.go:309] 
	I0621 18:27:24.909086   30068 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0621 18:27:24.909160   30068 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0621 18:27:24.909256   30068 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0621 18:27:24.909274   30068 kubeadm.go:309] 
	I0621 18:27:24.909401   30068 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0621 18:27:24.909522   30068 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0621 18:27:24.909544   30068 kubeadm.go:309] 
	I0621 18:27:24.909671   30068 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ha2utu.p9k0bq1xsr5791t7 \
	I0621 18:27:24.909771   30068 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df \
	I0621 18:27:24.909810   30068 kubeadm.go:309] 	--control-plane 
	I0621 18:27:24.909824   30068 kubeadm.go:309] 
	I0621 18:27:24.909898   30068 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0621 18:27:24.909904   30068 kubeadm.go:309] 
	I0621 18:27:24.909977   30068 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ha2utu.p9k0bq1xsr5791t7 \
	I0621 18:27:24.910064   30068 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df 
	I0621 18:27:24.910664   30068 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
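The join commands printed by kubeadm above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info. A minimal sketch of recomputing that value from the CA certificate at the path used in this run:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash returns kubeadm's discovery hash for the given CA certificate.
func caCertHash(caPath string) (string, error) {
	data, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	hash, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println(hash)
}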
	I0621 18:27:24.910700   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:27:24.910708   30068 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0621 18:27:24.912398   30068 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0621 18:27:24.913676   30068 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0621 18:27:24.919660   30068 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0621 18:27:24.919677   30068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0621 18:27:24.938734   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0621 18:27:25.303975   30068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0621 18:27:25.304070   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:25.304073   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-406291 minikube.k8s.io/updated_at=2024_06_21T18_27_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600 minikube.k8s.io/name=ha-406291 minikube.k8s.io/primary=true
	I0621 18:27:25.334777   30068 ops.go:34] apiserver oom_adj: -16
	I0621 18:27:25.436873   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:25.937461   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:26.436991   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:26.937206   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:27.437152   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:27.937860   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:28.437177   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:28.937036   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:29.437007   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:29.937140   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:30.437060   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:30.937199   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:31.437695   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:31.937675   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:32.437034   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:32.937808   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:33.437793   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:33.937401   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:34.437307   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:34.937172   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:35.437428   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:35.937146   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:36.436951   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:36.937873   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:37.039583   30068 kubeadm.go:1107] duration metric: took 11.735587948s to wait for elevateKubeSystemPrivileges
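The block of repeated "kubectl get sa default" runs above is minikube polling, roughly every 500ms, until kubeadm has created the default service account (the elevateKubeSystemPrivileges step); the whole wait took about 11.7s here. A minimal sketch of that polling loop, assuming kubectl and a kubeconfig are available:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls for the default service account until the deadline.
func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not found after %v", timeout)
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("default service account is present")
}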
	W0621 18:27:37.039626   30068 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0621 18:27:37.039635   30068 kubeadm.go:393] duration metric: took 23.327819322s to StartCluster
	I0621 18:27:37.039654   30068 settings.go:142] acquiring lock: {Name:mkdbb660cad4d8fb446e5c2ca4439ea3326e9592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:37.039737   30068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:27:37.040362   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/kubeconfig: {Name:mk87038194ab41f67dd50d90b017d32a83c3da4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:37.040584   30068 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:27:37.040604   30068 start.go:240] waiting for startup goroutines ...
	I0621 18:27:37.040603   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0621 18:27:37.040612   30068 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0621 18:27:37.040669   30068 addons.go:69] Setting storage-provisioner=true in profile "ha-406291"
	I0621 18:27:37.040677   30068 addons.go:69] Setting default-storageclass=true in profile "ha-406291"
	I0621 18:27:37.040699   30068 addons.go:234] Setting addon storage-provisioner=true in "ha-406291"
	I0621 18:27:37.040730   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:27:37.040700   30068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-406291"
	I0621 18:27:37.040772   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:37.041052   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.041075   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.041146   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.041174   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.055583   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42699
	I0621 18:27:37.056062   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.056549   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.056570   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.056894   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.057371   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.057399   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.061343   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44857
	I0621 18:27:37.061846   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.062393   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.062418   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.062721   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.062885   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.065021   30068 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:27:37.065351   30068 kapi.go:59] client config for ha-406291: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt", KeyFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key", CAFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf98a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0621 18:27:37.065825   30068 cert_rotation.go:137] Starting client certificate rotation controller
	I0621 18:27:37.066065   30068 addons.go:234] Setting addon default-storageclass=true in "ha-406291"
	I0621 18:27:37.066106   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:27:37.066471   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.066512   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.072759   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39433
	I0621 18:27:37.073274   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.073791   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.073819   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.074169   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.074346   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.076096   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:37.078312   30068 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0621 18:27:37.079815   30068 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 18:27:37.079840   30068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0621 18:27:37.079864   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:37.081896   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0621 18:27:37.082293   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.082859   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.082878   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.083163   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.083202   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.083607   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.083648   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.083733   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:37.083752   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.083817   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:37.083990   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:37.084135   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:37.084288   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:37.103512   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42081
	I0621 18:27:37.103937   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.104456   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.104473   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.104853   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.105052   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.106976   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:37.107211   30068 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0621 18:27:37.107231   30068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0621 18:27:37.107252   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:37.110295   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.110729   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:37.110755   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.110870   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:37.111030   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:37.111197   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:37.111314   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:37.137868   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0621 18:27:37.228739   30068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 18:27:37.290397   30068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0621 18:27:37.684619   30068 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0621 18:27:37.902862   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.902882   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.902957   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.902988   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903179   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903194   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903203   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.903210   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903287   30068 main.go:141] libmachine: (ha-406291) DBG | Closing plugin on server side
	I0621 18:27:37.903300   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903312   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903321   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.903328   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903474   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903485   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903513   30068 main.go:141] libmachine: (ha-406291) DBG | Closing plugin on server side
	I0621 18:27:37.903578   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903595   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903740   30068 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0621 18:27:37.903767   30068 round_trippers.go:469] Request Headers:
	I0621 18:27:37.903778   30068 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:27:37.903784   30068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:27:37.922164   30068 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0621 18:27:37.922691   30068 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0621 18:27:37.922706   30068 round_trippers.go:469] Request Headers:
	I0621 18:27:37.922713   30068 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:27:37.922718   30068 round_trippers.go:473]     Content-Type: application/json
	I0621 18:27:37.922720   30068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:27:37.926249   30068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0621 18:27:37.926491   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.926512   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.926731   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.926748   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.928515   30068 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0621 18:27:37.930095   30068 addons.go:510] duration metric: took 889.47949ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0621 18:27:37.930127   30068 start.go:245] waiting for cluster config update ...
	I0621 18:27:37.930137   30068 start.go:254] writing updated cluster config ...
	I0621 18:27:37.931687   30068 out.go:177] 
	I0621 18:27:37.933039   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:37.933102   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:37.934716   30068 out.go:177] * Starting "ha-406291-m02" control-plane node in "ha-406291" cluster
	I0621 18:27:37.935953   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:27:37.935970   30068 cache.go:56] Caching tarball of preloaded images
	I0621 18:27:37.936052   30068 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:27:37.936063   30068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:27:37.936142   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:37.936325   30068 start.go:360] acquireMachinesLock for ha-406291-m02: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:27:37.936370   30068 start.go:364] duration metric: took 24.972µs to acquireMachinesLock for "ha-406291-m02"
	I0621 18:27:37.936392   30068 start.go:93] Provisioning new machine with config: &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:27:37.936481   30068 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0621 18:27:37.938349   30068 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0621 18:27:37.938428   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.938450   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.952767   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34515
	I0621 18:27:37.953176   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.953649   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.953669   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.953963   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.954162   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:37.954301   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:37.954431   30068 start.go:159] libmachine.API.Create for "ha-406291" (driver="kvm2")
	I0621 18:27:37.954456   30068 client.go:168] LocalClient.Create starting
	I0621 18:27:37.954488   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem
	I0621 18:27:37.954518   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:27:37.954538   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:27:37.954589   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem
	I0621 18:27:37.954607   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:27:37.954621   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:27:37.954636   30068 main.go:141] libmachine: Running pre-create checks...
	I0621 18:27:37.954644   30068 main.go:141] libmachine: (ha-406291-m02) Calling .PreCreateCheck
	I0621 18:27:37.954836   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:37.955238   30068 main.go:141] libmachine: Creating machine...
	I0621 18:27:37.955253   30068 main.go:141] libmachine: (ha-406291-m02) Calling .Create
	I0621 18:27:37.955404   30068 main.go:141] libmachine: (ha-406291-m02) Creating KVM machine...
	I0621 18:27:37.956748   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found existing default KVM network
	I0621 18:27:37.956951   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found existing private KVM network mk-ha-406291
	I0621 18:27:37.957069   30068 main.go:141] libmachine: (ha-406291-m02) Setting up store path in /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 ...
	I0621 18:27:37.957091   30068 main.go:141] libmachine: (ha-406291-m02) Building disk image from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 18:27:37.957139   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:37.957062   30460 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:27:37.957278   30068 main.go:141] libmachine: (ha-406291-m02) Downloading /home/jenkins/minikube-integration/19112-8111/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0621 18:27:38.178433   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.178291   30460 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa...
	I0621 18:27:38.322659   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.322470   30460 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/ha-406291-m02.rawdisk...
	I0621 18:27:38.322709   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 (perms=drwx------)
	I0621 18:27:38.322719   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Writing magic tar header
	I0621 18:27:38.322734   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Writing SSH key tar header
	I0621 18:27:38.322745   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.322583   30460 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 ...
	I0621 18:27:38.322758   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02
	I0621 18:27:38.322822   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines
	I0621 18:27:38.322839   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines (perms=drwxr-xr-x)
	I0621 18:27:38.322855   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube (perms=drwxr-xr-x)
	I0621 18:27:38.322864   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111 (perms=drwxrwxr-x)
	I0621 18:27:38.322874   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0621 18:27:38.322882   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0621 18:27:38.322896   30068 main.go:141] libmachine: (ha-406291-m02) Creating domain...
	I0621 18:27:38.322919   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:27:38.322939   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111
	I0621 18:27:38.322950   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0621 18:27:38.322968   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins
	I0621 18:27:38.322980   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home
	I0621 18:27:38.322988   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Skipping /home - not owner
	I0621 18:27:38.324031   30068 main.go:141] libmachine: (ha-406291-m02) define libvirt domain using xml: 
	I0621 18:27:38.324058   30068 main.go:141] libmachine: (ha-406291-m02) <domain type='kvm'>
	I0621 18:27:38.324071   30068 main.go:141] libmachine: (ha-406291-m02)   <name>ha-406291-m02</name>
	I0621 18:27:38.324078   30068 main.go:141] libmachine: (ha-406291-m02)   <memory unit='MiB'>2200</memory>
	I0621 18:27:38.324087   30068 main.go:141] libmachine: (ha-406291-m02)   <vcpu>2</vcpu>
	I0621 18:27:38.324098   30068 main.go:141] libmachine: (ha-406291-m02)   <features>
	I0621 18:27:38.324107   30068 main.go:141] libmachine: (ha-406291-m02)     <acpi/>
	I0621 18:27:38.324116   30068 main.go:141] libmachine: (ha-406291-m02)     <apic/>
	I0621 18:27:38.324125   30068 main.go:141] libmachine: (ha-406291-m02)     <pae/>
	I0621 18:27:38.324134   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324149   30068 main.go:141] libmachine: (ha-406291-m02)   </features>
	I0621 18:27:38.324164   30068 main.go:141] libmachine: (ha-406291-m02)   <cpu mode='host-passthrough'>
	I0621 18:27:38.324173   30068 main.go:141] libmachine: (ha-406291-m02)   
	I0621 18:27:38.324184   30068 main.go:141] libmachine: (ha-406291-m02)   </cpu>
	I0621 18:27:38.324199   30068 main.go:141] libmachine: (ha-406291-m02)   <os>
	I0621 18:27:38.324209   30068 main.go:141] libmachine: (ha-406291-m02)     <type>hvm</type>
	I0621 18:27:38.324220   30068 main.go:141] libmachine: (ha-406291-m02)     <boot dev='cdrom'/>
	I0621 18:27:38.324231   30068 main.go:141] libmachine: (ha-406291-m02)     <boot dev='hd'/>
	I0621 18:27:38.324258   30068 main.go:141] libmachine: (ha-406291-m02)     <bootmenu enable='no'/>
	I0621 18:27:38.324280   30068 main.go:141] libmachine: (ha-406291-m02)   </os>
	I0621 18:27:38.324293   30068 main.go:141] libmachine: (ha-406291-m02)   <devices>
	I0621 18:27:38.324310   30068 main.go:141] libmachine: (ha-406291-m02)     <disk type='file' device='cdrom'>
	I0621 18:27:38.324333   30068 main.go:141] libmachine: (ha-406291-m02)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/boot2docker.iso'/>
	I0621 18:27:38.324344   30068 main.go:141] libmachine: (ha-406291-m02)       <target dev='hdc' bus='scsi'/>
	I0621 18:27:38.324350   30068 main.go:141] libmachine: (ha-406291-m02)       <readonly/>
	I0621 18:27:38.324357   30068 main.go:141] libmachine: (ha-406291-m02)     </disk>
	I0621 18:27:38.324363   30068 main.go:141] libmachine: (ha-406291-m02)     <disk type='file' device='disk'>
	I0621 18:27:38.324375   30068 main.go:141] libmachine: (ha-406291-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0621 18:27:38.324390   30068 main.go:141] libmachine: (ha-406291-m02)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/ha-406291-m02.rawdisk'/>
	I0621 18:27:38.324401   30068 main.go:141] libmachine: (ha-406291-m02)       <target dev='hda' bus='virtio'/>
	I0621 18:27:38.324412   30068 main.go:141] libmachine: (ha-406291-m02)     </disk>
	I0621 18:27:38.324421   30068 main.go:141] libmachine: (ha-406291-m02)     <interface type='network'>
	I0621 18:27:38.324431   30068 main.go:141] libmachine: (ha-406291-m02)       <source network='mk-ha-406291'/>
	I0621 18:27:38.324442   30068 main.go:141] libmachine: (ha-406291-m02)       <model type='virtio'/>
	I0621 18:27:38.324453   30068 main.go:141] libmachine: (ha-406291-m02)     </interface>
	I0621 18:27:38.324465   30068 main.go:141] libmachine: (ha-406291-m02)     <interface type='network'>
	I0621 18:27:38.324474   30068 main.go:141] libmachine: (ha-406291-m02)       <source network='default'/>
	I0621 18:27:38.324481   30068 main.go:141] libmachine: (ha-406291-m02)       <model type='virtio'/>
	I0621 18:27:38.324493   30068 main.go:141] libmachine: (ha-406291-m02)     </interface>
	I0621 18:27:38.324503   30068 main.go:141] libmachine: (ha-406291-m02)     <serial type='pty'>
	I0621 18:27:38.324516   30068 main.go:141] libmachine: (ha-406291-m02)       <target port='0'/>
	I0621 18:27:38.324527   30068 main.go:141] libmachine: (ha-406291-m02)     </serial>
	I0621 18:27:38.324540   30068 main.go:141] libmachine: (ha-406291-m02)     <console type='pty'>
	I0621 18:27:38.324553   30068 main.go:141] libmachine: (ha-406291-m02)       <target type='serial' port='0'/>
	I0621 18:27:38.324562   30068 main.go:141] libmachine: (ha-406291-m02)     </console>
	I0621 18:27:38.324572   30068 main.go:141] libmachine: (ha-406291-m02)     <rng model='virtio'>
	I0621 18:27:38.324596   30068 main.go:141] libmachine: (ha-406291-m02)       <backend model='random'>/dev/random</backend>
	I0621 18:27:38.324609   30068 main.go:141] libmachine: (ha-406291-m02)     </rng>
	I0621 18:27:38.324630   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324640   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324648   30068 main.go:141] libmachine: (ha-406291-m02)   </devices>
	I0621 18:27:38.324660   30068 main.go:141] libmachine: (ha-406291-m02) </domain>
	I0621 18:27:38.324670   30068 main.go:141] libmachine: (ha-406291-m02) 
	I0621 18:27:38.332042   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:20:08:0e in network default
	I0621 18:27:38.332641   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring networks are active...
	I0621 18:27:38.332676   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:38.333428   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring network default is active
	I0621 18:27:38.333804   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring network mk-ha-406291 is active
	I0621 18:27:38.334296   30068 main.go:141] libmachine: (ha-406291-m02) Getting domain xml...
	I0621 18:27:38.335120   30068 main.go:141] libmachine: (ha-406291-m02) Creating domain...
	I0621 18:27:39.549305   30068 main.go:141] libmachine: (ha-406291-m02) Waiting to get IP...
	I0621 18:27:39.550967   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:39.551951   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:39.551976   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:39.551936   30460 retry.go:31] will retry after 267.635955ms: waiting for machine to come up
	I0621 18:27:39.821522   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:39.821997   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:39.822029   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:39.821946   30460 retry.go:31] will retry after 374.873977ms: waiting for machine to come up
	I0621 18:27:40.198386   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:40.198873   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:40.198904   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:40.198809   30460 retry.go:31] will retry after 315.815993ms: waiting for machine to come up
	I0621 18:27:40.516366   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:40.516862   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:40.516886   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:40.516817   30460 retry.go:31] will retry after 541.866776ms: waiting for machine to come up
	I0621 18:27:41.060525   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:41.061206   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:41.061240   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:41.061128   30460 retry.go:31] will retry after 493.062164ms: waiting for machine to come up
	I0621 18:27:41.555747   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:41.556109   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:41.556139   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:41.556061   30460 retry.go:31] will retry after 805.68132ms: waiting for machine to come up
	I0621 18:27:42.362929   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:42.363432   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:42.363464   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:42.363390   30460 retry.go:31] will retry after 986.445399ms: waiting for machine to come up
	I0621 18:27:43.351818   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:43.352265   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:43.352293   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:43.352201   30460 retry.go:31] will retry after 1.001415085s: waiting for machine to come up
	I0621 18:27:44.355253   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:44.355689   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:44.355710   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:44.355671   30460 retry.go:31] will retry after 1.270979624s: waiting for machine to come up
	I0621 18:27:45.627787   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:45.628323   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:45.628354   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:45.628272   30460 retry.go:31] will retry after 2.328221347s: waiting for machine to come up
	I0621 18:27:47.958352   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:47.958918   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:47.958945   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:47.958858   30460 retry.go:31] will retry after 2.603205559s: waiting for machine to come up
	I0621 18:27:50.565502   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:50.565956   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:50.565982   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:50.565839   30460 retry.go:31] will retry after 3.267607258s: waiting for machine to come up
	I0621 18:27:53.834801   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:53.835311   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:53.835344   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:53.835270   30460 retry.go:31] will retry after 4.450176964s: waiting for machine to come up
	I0621 18:27:58.286744   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.287205   30068 main.go:141] libmachine: (ha-406291-m02) Found IP for machine: 192.168.39.89
	I0621 18:27:58.287228   30068 main.go:141] libmachine: (ha-406291-m02) Reserving static IP address...
	I0621 18:27:58.287241   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has current primary IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.287601   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find host DHCP lease matching {name: "ha-406291-m02", mac: "52:54:00:a6:9a:09", ip: "192.168.39.89"} in network mk-ha-406291
	I0621 18:27:58.359643   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Getting to WaitForSSH function...
	I0621 18:27:58.359672   30068 main.go:141] libmachine: (ha-406291-m02) Reserved static IP address: 192.168.39.89
	I0621 18:27:58.359686   30068 main.go:141] libmachine: (ha-406291-m02) Waiting for SSH to be available...
	I0621 18:27:58.362234   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.362656   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.362687   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.362831   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH client type: external
	I0621 18:27:58.362856   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa (-rw-------)
	I0621 18:27:58.362889   30068 main.go:141] libmachine: (ha-406291-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:58.362901   30068 main.go:141] libmachine: (ha-406291-m02) DBG | About to run SSH command:
	I0621 18:27:58.362914   30068 main.go:141] libmachine: (ha-406291-m02) DBG | exit 0
	I0621 18:27:58.489760   30068 main.go:141] libmachine: (ha-406291-m02) DBG | SSH cmd err, output: <nil>: 
	I0621 18:27:58.490247   30068 main.go:141] libmachine: (ha-406291-m02) KVM machine creation complete!
	I0621 18:27:58.490512   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:58.491093   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:58.491338   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:58.491506   30068 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0621 18:27:58.491523   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:27:58.492807   30068 main.go:141] libmachine: Detecting operating system of created instance...
	I0621 18:27:58.492820   30068 main.go:141] libmachine: Waiting for SSH to be available...
	I0621 18:27:58.492825   30068 main.go:141] libmachine: Getting to WaitForSSH function...
	I0621 18:27:58.492853   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.495422   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.495802   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.495822   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.496013   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.496199   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.496377   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.496515   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.496690   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.496943   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.496957   30068 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0621 18:27:58.609072   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:58.609094   30068 main.go:141] libmachine: Detecting the provisioner...
	I0621 18:27:58.609101   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.611976   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.612412   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.612450   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.612655   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.612869   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.613083   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.613234   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.613421   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.613617   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.613629   30068 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0621 18:27:58.726636   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0621 18:27:58.726736   30068 main.go:141] libmachine: found compatible host: buildroot
	I0621 18:27:58.726751   30068 main.go:141] libmachine: Provisioning with buildroot...
	I0621 18:27:58.726768   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.727017   30068 buildroot.go:166] provisioning hostname "ha-406291-m02"
	I0621 18:27:58.727040   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.727234   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.729851   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.730255   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.730296   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.730453   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.730628   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.730787   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.730932   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.731090   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.731271   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.731295   30068 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291-m02 && echo "ha-406291-m02" | sudo tee /etc/hostname
	I0621 18:27:58.855682   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291-m02
	
	I0621 18:27:58.855710   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.858373   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.858679   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.858702   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.858921   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.859107   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.859289   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.859473   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.859613   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.859768   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.859784   30068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:27:58.979692   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:58.979722   30068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:27:58.979735   30068 buildroot.go:174] setting up certificates
	I0621 18:27:58.979743   30068 provision.go:84] configureAuth start
	I0621 18:27:58.979750   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.980076   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:58.982730   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.983078   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.983110   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.983252   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.985344   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.985701   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.985721   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.985890   30068 provision.go:143] copyHostCerts
	I0621 18:27:58.985924   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:58.985962   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:27:58.985976   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:58.986057   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:27:58.986156   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:58.986180   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:27:58.986187   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:58.986229   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:27:58.986293   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:58.986317   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:27:58.986326   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:58.986360   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:27:58.986426   30068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291-m02 san=[127.0.0.1 192.168.39.89 ha-406291-m02 localhost minikube]
	I0621 18:27:59.066564   30068 provision.go:177] copyRemoteCerts
	I0621 18:27:59.066626   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:27:59.066653   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.069578   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.069924   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.069947   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.070132   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.070298   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.070432   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.070553   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.157218   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:27:59.157315   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 18:27:59.181198   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:27:59.181277   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:27:59.204590   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:27:59.204671   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0621 18:27:59.228836   30068 provision.go:87] duration metric: took 249.081961ms to configureAuth
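	For context on the configureAuth step above: provision.go generates a server certificate for the new machine carrying the SANs listed in the log (127.0.0.1, 192.168.39.89, ha-406291-m02, localhost, minikube), signed by the profile CA, and copyRemoteCerts then pushes ca.pem, server.pem and server-key.pem into /etc/docker on the guest. The fragment below is only a minimal, self-contained sketch of that idea using Go's crypto/x509; it issues a self-signed certificate rather than a CA-signed one and is not minikube's provision.go code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative only: a self-signed server cert carrying the same SANs
	// as the provision.go line above (the real cert is signed by the
	// profile's ca.pem / ca-key.pem).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-406291-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-406291-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.89")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}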
	I0621 18:27:59.228857   30068 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:27:59.229023   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:59.229086   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.231759   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.232083   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.232114   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.232338   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.232525   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.232684   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.232859   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.233030   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:59.233222   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:59.233247   30068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:27:59.513149   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 18:27:59.513176   30068 main.go:141] libmachine: Checking connection to Docker...
	I0621 18:27:59.513184   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetURL
	I0621 18:27:59.514352   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using libvirt version 6000000
	I0621 18:27:59.516825   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.517208   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.517232   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.517421   30068 main.go:141] libmachine: Docker is up and running!
	I0621 18:27:59.517438   30068 main.go:141] libmachine: Reticulating splines...
	I0621 18:27:59.517446   30068 client.go:171] duration metric: took 21.562982419s to LocalClient.Create
	I0621 18:27:59.517469   30068 start.go:167] duration metric: took 21.563040702s to libmachine.API.Create "ha-406291"
	I0621 18:27:59.517482   30068 start.go:293] postStartSetup for "ha-406291-m02" (driver="kvm2")
	I0621 18:27:59.517494   30068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:27:59.517516   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.517768   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:27:59.517792   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.520113   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.520510   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.520540   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.520681   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.520881   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.521084   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.521256   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.607755   30068 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:27:59.611555   30068 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:27:59.611581   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:27:59.611696   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:27:59.611804   30068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:27:59.611817   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:27:59.611939   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:27:59.620359   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:59.643420   30068 start.go:296] duration metric: took 125.923821ms for postStartSetup
	I0621 18:27:59.643465   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:59.644062   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:59.646345   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.646685   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.646713   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.646924   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:59.647158   30068 start.go:128] duration metric: took 21.710666055s to createHost
	I0621 18:27:59.647181   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.649469   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.649766   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.649808   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.649962   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.650164   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.650334   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.650463   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.650585   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:59.650778   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:59.650790   30068 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0621 18:27:59.762223   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718994479.737744516
	
	I0621 18:27:59.762248   30068 fix.go:216] guest clock: 1718994479.737744516
	I0621 18:27:59.762259   30068 fix.go:229] Guest: 2024-06-21 18:27:59.737744516 +0000 UTC Remote: 2024-06-21 18:27:59.647170431 +0000 UTC m=+77.232139235 (delta=90.574085ms)
	I0621 18:27:59.762274   30068 fix.go:200] guest clock delta is within tolerance: 90.574085ms
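	For reference, the fix.go check above reads the guest clock over SSH (date +%s.%N) and only resynchronizes it when the drift from the host exceeds a tolerance; here the delta was ~90ms and was accepted. A toy reproduction of the comparison using the two timestamps from this log and an assumed 2-second tolerance (the real threshold is not shown in this output):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the log lines above.
	guest := time.Unix(1718994479, 737744516).UTC()
	host := time.Date(2024, 6, 21, 18, 27, 59, 647170431, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}

	tolerance := 2 * time.Second // assumed for illustration only
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}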
	I0621 18:27:59.762279   30068 start.go:83] releasing machines lock for "ha-406291-m02", held for 21.825898335s
	I0621 18:27:59.762294   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.762550   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:59.765379   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.765744   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.765772   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.768017   30068 out.go:177] * Found network options:
	I0621 18:27:59.769201   30068 out.go:177]   - NO_PROXY=192.168.39.198
	W0621 18:27:59.770311   30068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0621 18:27:59.770350   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.770853   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.771049   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.771143   30068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:27:59.771180   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	W0621 18:27:59.771247   30068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0621 18:27:59.771305   30068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:27:59.771322   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.774073   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774210   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774455   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.774482   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774586   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.774595   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.774615   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774740   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.774796   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.774875   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.774963   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.775030   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.775150   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.775184   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:28:00.009851   30068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:28:00.016373   30068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:28:00.016450   30068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:28:00.032199   30068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0621 18:28:00.032221   30068 start.go:494] detecting cgroup driver to use...
	I0621 18:28:00.032283   30068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:28:00.047343   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:28:00.061720   30068 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:28:00.061774   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:28:00.074668   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:28:00.087919   30068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:28:00.213060   30068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:28:00.376339   30068 docker.go:233] disabling docker service ...
	I0621 18:28:00.376406   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:28:00.391732   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:28:00.405305   30068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:28:00.525867   30068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:28:00.642362   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 18:28:00.656276   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:28:00.673811   30068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:28:00.673883   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.683794   30068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:28:00.683849   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.693601   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.703298   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.712924   30068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:28:00.722921   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.733272   30068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.749781   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
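	The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.9 as the pause image and cgroupfs as the cgroup manager, with conmon placed in the pod cgroup and ip_unprivileged_port_start=0 added to default_sysctls. The snippet below applies the two central substitutions to an in-memory copy of the file, purely to illustrate the intended end state; minikube itself shells out to sed over SSH exactly as logged.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A stand-in for the logged sed edits, applied to a fabricated sample
	// of 02-crio.conf rather than the real file on the guest.
	conf := "pause_image = \"registry.k8s.io/pause:3.8\"\ncgroup_manager = \"systemd\"\n"

	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf)
}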
	I0621 18:28:00.759708   30068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:28:00.768749   30068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 18:28:00.768811   30068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 18:28:00.780758   30068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 18:28:00.789993   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:28:00.904855   30068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 18:28:01.039631   30068 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:28:01.039706   30068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:28:01.044480   30068 start.go:562] Will wait 60s for crictl version
	I0621 18:28:01.044536   30068 ssh_runner.go:195] Run: which crictl
	I0621 18:28:01.048220   30068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:28:01.089333   30068 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:28:01.089402   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:28:01.115665   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:28:01.144585   30068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:28:01.145952   30068 out.go:177]   - env NO_PROXY=192.168.39.198
	I0621 18:28:01.147149   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:28:01.149745   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:28:01.150121   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:28:01.150153   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:28:01.150424   30068 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:28:01.154395   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
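	The bash one-liner above makes host.minikube.internal resolve to the libvirt gateway (192.168.39.1) by dropping any stale entry from /etc/hosts and appending a fresh one. The Go fragment below mirrors that filter-and-append logic on an in-memory copy, just to spell out what the shell pipeline does; it is not minikube's implementation.

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Fabricated /etc/hosts content for illustration.
	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		// Drop any existing host.minikube.internal mapping ...
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	// ... then append the gateway mapping, as the logged command does.
	kept = append(kept, "192.168.39.1\thost.minikube.internal")

	fmt.Println(strings.Join(kept, "\n"))
}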
	I0621 18:28:01.167802   30068 mustload.go:65] Loading cluster: ha-406291
	I0621 18:28:01.168024   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:28:01.168528   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:28:01.168581   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:28:01.183458   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35465
	I0621 18:28:01.183955   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:28:01.184452   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:28:01.184472   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:28:01.184809   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:28:01.185006   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:28:01.186504   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:28:01.186796   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:28:01.186838   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:28:01.201898   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38995
	I0621 18:28:01.202307   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:28:01.202715   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:28:01.202735   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:28:01.203060   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:28:01.203242   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:28:01.203402   30068 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.89
	I0621 18:28:01.203414   30068 certs.go:194] generating shared ca certs ...
	I0621 18:28:01.203427   30068 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.203536   30068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:28:01.203569   30068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:28:01.203578   30068 certs.go:256] generating profile certs ...
	I0621 18:28:01.203637   30068 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:28:01.203663   30068 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63
	I0621 18:28:01.203682   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.89 192.168.39.254]
	I0621 18:28:01.277240   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 ...
	I0621 18:28:01.277269   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63: {Name:mk0eb1e86875fe5e87f845f9e621f66001c859bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.277433   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63 ...
	I0621 18:28:01.277446   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63: {Name:mk95e28e76a927e44fae3dabafa76ecc474c70ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.277517   30068 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:28:01.277686   30068 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
	I0621 18:28:01.277852   30068 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:28:01.277870   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:28:01.277883   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:28:01.277894   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:28:01.277906   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:28:01.277922   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:28:01.277934   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:28:01.277946   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:28:01.277957   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:28:01.278003   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:28:01.278030   30068 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:28:01.278039   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:28:01.278059   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:28:01.278080   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:28:01.278100   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:28:01.278136   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:28:01.278162   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.278179   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.278191   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.278220   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:28:01.281289   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:28:01.281749   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:28:01.281771   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:28:01.281960   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:28:01.282180   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:28:01.282351   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:28:01.282534   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:28:01.350153   30068 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0621 18:28:01.355146   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0621 18:28:01.366317   30068 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0621 18:28:01.370418   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0621 18:28:01.381527   30068 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0621 18:28:01.385371   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0621 18:28:01.395583   30068 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0621 18:28:01.399523   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0621 18:28:01.409427   30068 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0621 18:28:01.413340   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0621 18:28:01.424281   30068 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0621 18:28:01.428574   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0621 18:28:01.443501   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:28:01.467141   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:28:01.489464   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:28:01.512839   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:28:01.536345   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0621 18:28:01.560903   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0621 18:28:01.585228   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:28:01.609236   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:28:01.632797   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:28:01.657717   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:28:01.680728   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:28:01.704813   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0621 18:28:01.722206   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0621 18:28:01.739548   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0621 18:28:01.757066   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0621 18:28:01.773769   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0621 18:28:01.790648   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0621 18:28:01.807019   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0621 18:28:01.824606   30068 ssh_runner.go:195] Run: openssl version
	I0621 18:28:01.830760   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:28:01.841994   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.846701   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.846753   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.852556   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0621 18:28:01.863407   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:28:01.874586   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.879134   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.879185   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.884636   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:28:01.895639   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:28:01.907107   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.911747   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.911813   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.917537   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 18:28:01.928452   30068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:28:01.932569   30068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 18:28:01.932640   30068 kubeadm.go:928] updating node {m02 192.168.39.89 8443 v1.30.2 crio true true} ...
	I0621 18:28:01.932831   30068 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
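	The kubeadm.go output above is the systemd drop-in that pins this node's kubelet to its own hostname and IP (ha-406291-m02 / 192.168.39.89) and to the v1.30.2 binaries directory. Below is a minimal sketch of rendering such a drop-in with text/template; the template text and field names are illustrative, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// Hypothetical drop-in template; only the node-specific fields vary.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.30.2",
		"NodeName":          "ha-406291-m02",
		"NodeIP":            "192.168.39.89",
	})
}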
	I0621 18:28:01.932869   30068 kube-vip.go:115] generating kube-vip config ...
	I0621 18:28:01.932919   30068 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:28:01.949970   30068 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:28:01.950046   30068 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
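	kube-vip.go above auto-enables control-plane load-balancing (the lb_enable/lb_port env vars in the manifest) because the IPVS modules could be loaded by the modprobe run just before it. A rough, local stand-in for that decision follows; minikube runs the same modprobe over SSH on the guest, and the port here is simply copied from the manifest above.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Try to load the IPVS modules the same way the log shows; if that
	// succeeds, kube-vip's load-balancing env vars get enabled.
	err := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run()

	lbEnable := err == nil
	fmt.Printf("lb_enable=%v lb_port=8443\n", lbEnable)
}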
	I0621 18:28:01.950102   30068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:28:01.960116   30068 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0621 18:28:01.960197   30068 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0621 18:28:01.969893   30068 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0621 18:28:01.969926   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0621 18:28:01.969997   30068 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0621 18:28:01.970033   30068 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0621 18:28:01.970001   30068 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0621 18:28:01.974344   30068 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0621 18:28:01.974375   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0621 18:28:02.755689   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0621 18:28:02.755764   30068 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0621 18:28:02.760415   30068 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0621 18:28:02.760448   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0621 18:28:55.051081   30068 out.go:177] 
	W0621 18:28:55.052955   30068 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0] Decompressors:map[bz2:0xc000769610 gz:0xc000769618 tar:0xc0007695c0 tar.bz2:0xc0007695d0 tar.gz:0xc0007695e0 tar.xz:0xc0007695f0 tar.zst:0xc000769600 tbz2:0xc0007695d0 tgz:0xc0007695e0 txz:0xc0007695f0 tzst:0xc000769600 xz:0xc000769620 zip:0xc000769630 zst:0xc000769628] Getters:map[file:0xc0009371c0 http:0xc
0008bcf50 https:0xc0008bcfa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:46716->151.101.193.55:443: read: connection reset by peer
	W0621 18:28:55.052979   30068 out.go:239] * 
	W0621 18:28:55.053829   30068 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0621 18:28:55.055312   30068 out.go:177] 
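	What actually failed here: go-getter's checksum-verified download of the v1.30.2 kubelet from dl.k8s.io was cut short by a TCP reset ("read: connection reset by peer"), so the new node's binaries could not be transferred and minikube exited with GUEST_START. To reproduce the download outside the test harness, the sketch below fetches the same binary with plain net/http and verifies it against the published .sha256 file; this is not the go-getter code path minikube uses, just an equivalent check.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func main() {
	base := "https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet"

	// Fetch the published checksum first (the .sha256 file holds the hex digest).
	sumResp, err := http.Get(base + ".sha256")
	if err != nil {
		panic(err)
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		panic(err)
	}

	// Download the binary while hashing it on the fly.
	out, err := os.Create("kubelet.download")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	resp, err := http.Get(base)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	h := sha256.New()
	// The CI run hit "connection reset by peer" while reading this body.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		panic(err)
	}

	got := hex.EncodeToString(h.Sum(nil))
	fmt.Println("checksum ok:", got == strings.TrimSpace(string(want)))
}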
	
	
	==> CRI-O <==
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.116624837Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995282116602985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dce82aa4-a021-4cc4-bcfc-0619a2259c2b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.117598771Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7bdd9d31-2b20-4e84-92d2-1173ca597cbd name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.117664846Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7bdd9d31-2b20-4e84-92d2-1173ca597cbd name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.121062436Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7bdd9d31-2b20-4e84-92d2-1173ca597cbd name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.165075963Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dfbcb18a-054a-46c2-a310-317f870659c9 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.165241700Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dfbcb18a-054a-46c2-a310-317f870659c9 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.166894605Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=267e45c4-02b6-49c0-bc26-7c738d987c86 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.167386078Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995282167360470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=267e45c4-02b6-49c0-bc26-7c738d987c86 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.168384173Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28debdc8-692f-40aa-a5f0-aeb56b953c76 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.168461899Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28debdc8-692f-40aa-a5f0-aeb56b953c76 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.168813537Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28debdc8-692f-40aa-a5f0-aeb56b953c76 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.205368985Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2741015b-2621-4258-abe1-62b04e29926c name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.205452424Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2741015b-2621-4258-abe1-62b04e29926c name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.206665890Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f648ac2c-33d9-4472-b90a-408cc1a0e536 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.207062031Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995282207039284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f648ac2c-33d9-4472-b90a-408cc1a0e536 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.207966734Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5c01198-a235-4cbd-a93a-f0f511581062 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.208041516Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5c01198-a235-4cbd-a93a-f0f511581062 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.208306652Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f5c01198-a235-4cbd-a93a-f0f511581062 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.243268990Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5db63a50-8e33-4915-a57b-f4e204ed22f3 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.243350090Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5db63a50-8e33-4915-a57b-f4e204ed22f3 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.244355524Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=61c13696-59b4-4828-bc74-647b24916665 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.244752825Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995282244733178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61c13696-59b4-4828-bc74-647b24916665 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.245328326Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19b01227-1bae-4542-8ae9-a47e4b38690e name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.245394547Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19b01227-1bae-4542-8ae9-a47e4b38690e name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:22 ha-406291 crio[679]: time="2024-06-21 18:41:22.245616670Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19b01227-1bae-4542-8ae9-a47e4b38690e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	252cb2f279857       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   12 minutes ago      Running             busybox                   0                   cd0fd4f6a3d6c       busybox-fc5497c4f-qvl48
	6d732e2622f11       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   0                   59eb38b2794b0       coredns-7db6d8ff4d-7ng4v
	6088ccc5ec4be       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   0                   a68caa8578d30       coredns-7db6d8ff4d-nx5xs
	9d0ad73531279       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       0                   ab6a16146209c       storage-provisioner
	468b13f5a8054       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      13 minutes ago      Running             kindnet-cni               0                   956df8749e8db       kindnet-vnds7
	e41f8891c5177       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      13 minutes ago      Running             kube-proxy                0                   ab9fd8c2e0094       kube-proxy-xnbqj
	96a229fabb5aa       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     14 minutes ago      Running             kube-vip                  0                   79ad95611cf22       kube-vip-ha-406291
	a143e6000662a       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      14 minutes ago      Running             kube-scheduler            0                   7cae0fc993f3a       kube-scheduler-ha-406291
	89b399d67fa40       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago      Running             etcd                      0                   afce4542ea7ca       etcd-ha-406291
	2d71c6ae5cee5       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      14 minutes ago      Running             kube-apiserver            0                   9552de7a0cb73       kube-apiserver-ha-406291
	3fbe446b39e8d       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      14 minutes ago      Running             kube-controller-manager   0                   2b8837f8e36da       kube-controller-manager-ha-406291
	
	
	==> coredns [6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57758 - 16030 "HINFO IN 938012208132191314.8379741084222464033. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014128651s
	[INFO] 10.244.0.4:60864 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000870211s
	[INFO] 10.244.0.4:49527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014553s
	[INFO] 10.244.0.4:59987 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181145s
	[INFO] 10.244.0.4:59378 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009664502s
	[INFO] 10.244.0.4:59188 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181625s
	[INFO] 10.244.0.4:33100 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137671s
	[INFO] 10.244.0.4:43551 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129631s
	[INFO] 10.244.0.4:59759 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152418s
	[INFO] 10.244.0.4:60292 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090372s
	[INFO] 10.244.0.4:47967 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093215s
	[INFO] 10.244.0.4:44642 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175452s
	[INFO] 10.244.0.4:49677 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000070108s
	
	
	==> coredns [6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45911 - 30730 "HINFO IN 2397840142540691982.2649863782968500509. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014966559s
	[INFO] 10.244.0.4:38404 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.013105268s
	[INFO] 10.244.0.4:49299 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.225770527s
	[INFO] 10.244.0.4:41342 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.010990835s
	[INFO] 10.244.0.4:55838 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003903098s
	[INFO] 10.244.0.4:59078 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163236s
	[INFO] 10.244.0.4:39541 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147137s
	[INFO] 10.244.0.4:47420 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000120366s
	[INFO] 10.244.0.4:54009 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000255172s
	
	
	==> describe nodes <==
	Name:               ha-406291
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=ha-406291
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_21T18_27_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 18:27:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406291
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 18:41:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    ha-406291
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 10b5f2f4e64d426eb3a71e7a23c0cea5
	  System UUID:                10b5f2f4-e64d-426e-b3a7-1e7a23c0cea5
	  Boot ID:                    10778ad9-ed13-4749-a084-25b2b2bfde76
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qvl48              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-7ng4v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-nx5xs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-406291                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-vnds7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-406291             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-406291    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-xnbqj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-406291             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-406291                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node ha-406291 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node ha-406291 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node ha-406291 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m   node-controller  Node ha-406291 event: Registered Node ha-406291 in Controller
	  Normal  NodeReady                13m   kubelet          Node ha-406291 status is now: NodeReady
	
	
	Name:               ha-406291-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406291-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=ha-406291
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_21T18_41_02_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 18:41:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406291-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 18:41:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 18:41:10 +0000   Fri, 21 Jun 2024 18:41:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 18:41:10 +0000   Fri, 21 Jun 2024 18:41:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 18:41:10 +0000   Fri, 21 Jun 2024 18:41:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 18:41:10 +0000   Fri, 21 Jun 2024 18:41:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.193
	  Hostname:    ha-406291-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7aeb6d6b65b246d89e229cf308cb4c9a
	  System UUID:                7aeb6d6b-65b2-46d8-9e22-9cf308cb4c9a
	  Boot ID:                    077bb108-4737-40c3-9892-3695b5a49d4a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-drm4v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-xrm6w              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21s
	  kube-system                 kube-proxy-vknv4           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16s                kube-proxy       
	  Normal  NodeHasSufficientMemory  21s (x2 over 21s)  kubelet          Node ha-406291-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x2 over 21s)  kubelet          Node ha-406291-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x2 over 21s)  kubelet          Node ha-406291-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20s                node-controller  Node ha-406291-m03 event: Registered Node ha-406291-m03 in Controller
	  Normal  NodeReady                12s                kubelet          Node ha-406291-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[Jun21 18:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051748] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037330] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.458081] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.725935] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +4.855560] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun21 18:27] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.057394] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056681] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.167604] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.147792] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.253886] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +3.905184] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.549385] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.060073] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.066237] systemd-fstab-generator[1360]: Ignoring "noauto" option for root device
	[  +0.078680] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.552032] kauditd_printk_skb: 21 callbacks suppressed
	[Jun21 18:28] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8] <==
	{"level":"info","ts":"2024-06-21T18:27:18.93929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-21T18:27:18.93932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgPreVoteResp from f1d2ab5330a2a0e3 at term 1"}
	{"level":"info","ts":"2024-06-21T18:27:18.939332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became candidate at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.939339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgVoteResp from f1d2ab5330a2a0e3 at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.939349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became leader at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.93936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f1d2ab5330a2a0e3 elected leader f1d2ab5330a2a0e3 at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.949394Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.951989Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f1d2ab5330a2a0e3","local-member-attributes":"{Name:ha-406291 ClientURLs:[https://192.168.39.198:2379]}","request-path":"/0/members/f1d2ab5330a2a0e3/attributes","cluster-id":"9fb372ad12afeb1b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-21T18:27:18.952029Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:27:18.952218Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:27:18.966375Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9fb372ad12afeb1b","local-member-id":"f1d2ab5330a2a0e3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.966532Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.966591Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.968078Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.198:2379"}
	{"level":"info","ts":"2024-06-21T18:27:18.969834Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-21T18:27:18.973596Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-21T18:27:18.986355Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-21T18:27:37.357719Z","caller":"traceutil/trace.go:171","msg":"trace[571743030] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"105.067279ms","start":"2024-06-21T18:27:37.252598Z","end":"2024-06-21T18:27:37.357665Z","steps":["trace[571743030] 'process raft request'  (duration: 48.775466ms)","trace[571743030] 'compare'  (duration: 56.093787ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T18:28:12.689426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.176174ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11593268453381319053 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:496 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:369 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-21T18:28:12.689586Z","caller":"traceutil/trace.go:171","msg":"trace[939483523] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"172.541349ms","start":"2024-06-21T18:28:12.517021Z","end":"2024-06-21T18:28:12.689563Z","steps":["trace[939483523] 'process raft request'  (duration: 46.605278ms)","trace[939483523] 'compare'  (duration: 124.988397ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-21T18:37:19.55118Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2024-06-21T18:37:19.562898Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":969,"took":"11.353931ms","hash":518064132,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2441216,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-06-21T18:37:19.562955Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":518064132,"revision":969,"compact-revision":-1}
	{"level":"info","ts":"2024-06-21T18:41:01.46327Z","caller":"traceutil/trace.go:171","msg":"trace[373022302] transaction","detail":"{read_only:false; response_revision:1916; number_of_response:1; }","duration":"202.232692ms","start":"2024-06-21T18:41:01.260997Z","end":"2024-06-21T18:41:01.46323Z","steps":["trace[373022302] 'process raft request'  (duration: 201.291371ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-21T18:41:01.463374Z","caller":"traceutil/trace.go:171","msg":"trace[1787973675] transaction","detail":"{read_only:false; response_revision:1917; number_of_response:1; }","duration":"177.381269ms","start":"2024-06-21T18:41:01.285981Z","end":"2024-06-21T18:41:01.463362Z","steps":["trace[1787973675] 'process raft request'  (duration: 177.120594ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:41:22 up 14 min,  0 users,  load average: 0.41, 0.24, 0.14
	Linux ha-406291 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d] <==
	I0621 18:39:49.520764       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:49.520908       1 main.go:227] handling current node
	I0621 18:39:59.524302       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:59.524430       1 main.go:227] handling current node
	I0621 18:40:09.536871       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:09.536951       1 main.go:227] handling current node
	I0621 18:40:19.546045       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:19.546228       1 main.go:227] handling current node
	I0621 18:40:29.557033       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:29.557254       1 main.go:227] handling current node
	I0621 18:40:39.561036       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:39.561193       1 main.go:227] handling current node
	I0621 18:40:49.569235       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:49.569361       1 main.go:227] handling current node
	I0621 18:40:59.579375       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:59.579516       1 main.go:227] handling current node
	I0621 18:41:09.583520       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:41:09.583631       1 main.go:227] handling current node
	I0621 18:41:09.583661       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:41:09.583679       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:41:09.583931       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.193 Flags: [] Table: 0} 
	I0621 18:41:19.597094       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:41:19.597117       1 main.go:227] handling current node
	I0621 18:41:19.597173       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:41:19.597182       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3] <==
	I0621 18:27:21.231033       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0621 18:27:21.231057       1 policy_source.go:224] refreshing policies
	E0621 18:27:21.244004       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0621 18:27:21.291900       1 controller.go:615] quota admission added evaluator for: namespaces
	I0621 18:27:21.301249       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0621 18:27:22.093764       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0621 18:27:22.100226       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0621 18:27:22.100345       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0621 18:27:22.679124       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0621 18:27:22.717908       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0621 18:27:22.803597       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0621 18:27:22.812663       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.198]
	I0621 18:27:22.813674       1 controller.go:615] quota admission added evaluator for: endpoints
	I0621 18:27:22.817676       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0621 18:27:23.142771       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0621 18:27:24.323202       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0621 18:27:24.338622       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0621 18:27:24.532806       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0621 18:27:36.921775       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0621 18:27:37.247444       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0621 18:40:26.217258       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52318: use of closed network connection
	E0621 18:40:26.646809       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52394: use of closed network connection
	E0621 18:40:27.039177       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52460: use of closed network connection
	E0621 18:40:29.475531       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52582: use of closed network connection
	E0621 18:40:29.631306       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52614: use of closed network connection
	
	
	==> kube-controller-manager [3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31] <==
	I0621 18:27:37.660938       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="161.085µs"
	I0621 18:27:39.328050       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="55.475µs"
	I0621 18:27:39.330983       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.725µs"
	I0621 18:27:39.352409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.246µs"
	I0621 18:27:39.366116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.163µs"
	I0621 18:27:40.575618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="65.679µs"
	I0621 18:27:40.612176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.937752ms"
	I0621 18:27:40.612598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.232µs"
	I0621 18:27:40.634931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.444693ms"
	I0621 18:27:40.635035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.847µs"
	I0621 18:27:41.885215       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0621 18:28:57.137627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.563277ms"
	I0621 18:28:57.164070       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.375749ms"
	I0621 18:28:57.164194       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.743µs"
	I0621 18:29:00.876863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.452577ms"
	I0621 18:29:00.877083       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.932µs"
	I0621 18:41:01.468373       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-406291-m03\" does not exist"
	I0621 18:41:01.505245       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-406291-m03" podCIDRs=["10.244.1.0/24"]
	I0621 18:41:02.015312       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-406291-m03"
	I0621 18:41:10.879504       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-406291-m03"
	I0621 18:41:10.905675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="137.95µs"
	I0621 18:41:10.905996       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.91µs"
	I0621 18:41:10.921286       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.939µs"
	I0621 18:41:14.431187       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.902838ms"
	I0621 18:41:14.431268       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.911µs"
	
	
	==> kube-proxy [e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547] <==
	I0621 18:27:38.126736       1 server_linux.go:69] "Using iptables proxy"
	I0621 18:27:38.143236       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.198"]
	I0621 18:27:38.177576       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0621 18:27:38.177626       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0621 18:27:38.177644       1 server_linux.go:165] "Using iptables Proxier"
	I0621 18:27:38.180797       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0621 18:27:38.181002       1 server.go:872] "Version info" version="v1.30.2"
	I0621 18:27:38.181026       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 18:27:38.182882       1 config.go:192] "Starting service config controller"
	I0621 18:27:38.183195       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0621 18:27:38.183262       1 config.go:101] "Starting endpoint slice config controller"
	I0621 18:27:38.183278       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0621 18:27:38.184787       1 config.go:319] "Starting node config controller"
	I0621 18:27:38.184819       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0621 18:27:38.283818       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0621 18:27:38.283839       1 shared_informer.go:320] Caches are synced for service config
	I0621 18:27:38.285303       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d] <==
	W0621 18:27:21.175406       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0621 18:27:21.176948       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0621 18:27:21.176960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0621 18:27:21.176992       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:21.177025       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177056       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0621 18:27:21.177088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0621 18:27:21.177120       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0621 18:27:21.177197       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:21.177204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0621 18:27:22.041765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.041824       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.144830       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:22.144881       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0621 18:27:22.217224       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.217266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.256407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:22.256450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0621 18:27:22.361486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0621 18:27:22.361536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0621 18:27:22.366073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0621 18:27:22.366190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0621 18:27:25.267361       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 21 18:36:24 ha-406291 kubelet[1367]: E0621 18:36:24.482853    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:36:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:36:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:36:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:36:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:37:24 ha-406291 kubelet[1367]: E0621 18:37:24.483671    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:37:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:37:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:37:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:37:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:38:24 ha-406291 kubelet[1367]: E0621 18:38:24.483473    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:38:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:38:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:38:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:38:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:39:24 ha-406291 kubelet[1367]: E0621 18:39:24.484210    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:39:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:39:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:39:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:39:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:40:24 ha-406291 kubelet[1367]: E0621 18:40:24.483552    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:40:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:40:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:40:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:40:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
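The one recurring error in the kubelet log above is the KUBE-KUBELET-CANARY failure: ip6tables cannot initialize the nat table, which means the guest kernel has no ip6table_nat support loaded. A minimal diagnostic sketch against this profile (the modprobe step is a hypothetical remediation assuming the guest kernel ships the module; it is not part of the test harness):

    # Inspect the ip6tables nat table inside the ha-406291 guest; the kubelet canary fails when this table is absent.
    out/minikube-linux-amd64 -p ha-406291 ssh -- "sudo ip6tables -t nat -L -n"
    # Hypothetical remediation: load the ip6table_nat module and re-check.
    out/minikube-linux-amd64 -p ha-406291 ssh -- "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L -n"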
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-406291 -n ha-406291
helpers_test.go:261: (dbg) Run:  kubectl --context ha-406291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-p2c87
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-406291 describe pod busybox-fc5497c4f-p2c87
helpers_test.go:282: (dbg) kubectl --context ha-406291 describe pod busybox-fc5497c4f-p2c87:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-p2c87
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q8tzk (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-q8tzk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  119s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  4s (x2 over 13s)    default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
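The describe output above explains why busybox-fc5497c4f-p2c87 stays Pending: the pod carries anti-affinity rules and, per the scheduler events, every available node already fails them, so no placement or preemption is possible. A short way to confirm this from the same context (a sketch; the jsonpath filter is illustrative and not used by the test harness):

    # List the scheduling failures recorded as events in the default namespace.
    kubectl --context ha-406291 get events --field-selector reason=FailedScheduling
    # Print the PodScheduled condition message for the Pending pod.
    kubectl --context ha-406291 get pod busybox-fc5497c4f-p2c87 -o jsonpath='{.status.conditions[?(@.type=="PodScheduled")].message}'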
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (3.46s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (2.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:413: expected profile "ha-406291" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-406291\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-406291\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-406291\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.198\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.89\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.193\",\"Port\":0,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":fal
se,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":fa
lse,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-406291 -n ha-406291
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-406291 logs -n 25: (1.121378168s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |           |         |         |                     |                     |
	| node    | add -p ha-406291 -v=7                | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:41 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-406291 node stop m02 -v=7         | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:41 UTC | 21 Jun 24 18:41 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/21 18:26:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0621 18:26:42.447747   30068 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:26:42.447858   30068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:26:42.447867   30068 out.go:304] Setting ErrFile to fd 2...
	I0621 18:26:42.447871   30068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:26:42.448064   30068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:26:42.448611   30068 out.go:298] Setting JSON to false
	I0621 18:26:42.449397   30068 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4100,"bootTime":1718990302,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 18:26:42.449454   30068 start.go:139] virtualization: kvm guest
	I0621 18:26:42.451750   30068 out.go:177] * [ha-406291] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 18:26:42.453097   30068 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 18:26:42.453116   30068 notify.go:220] Checking for updates...
	I0621 18:26:42.456195   30068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 18:26:42.457398   30068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:26:42.458579   30068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.459798   30068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 18:26:42.461088   30068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 18:26:42.462525   30068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 18:26:42.497263   30068 out.go:177] * Using the kvm2 driver based on user configuration
	I0621 18:26:42.498734   30068 start.go:297] selected driver: kvm2
	I0621 18:26:42.498753   30068 start.go:901] validating driver "kvm2" against <nil>
	I0621 18:26:42.498763   30068 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 18:26:42.499421   30068 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:26:42.499483   30068 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 18:26:42.513772   30068 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 18:26:42.513840   30068 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0621 18:26:42.514036   30068 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0621 18:26:42.514063   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:26:42.514070   30068 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0621 18:26:42.514080   30068 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0621 18:26:42.514119   30068 start.go:340] cluster config:
	{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:26:42.514203   30068 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:26:42.515839   30068 out.go:177] * Starting "ha-406291" primary control-plane node in "ha-406291" cluster
	I0621 18:26:42.516925   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:26:42.516952   30068 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 18:26:42.516960   30068 cache.go:56] Caching tarball of preloaded images
	I0621 18:26:42.517025   30068 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:26:42.517035   30068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:26:42.517302   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:26:42.517325   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json: {Name:mkd43eceea282503c79b6e4b90bbf7258fcf8b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:26:42.517445   30068 start.go:360] acquireMachinesLock for ha-406291: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:26:42.517470   30068 start.go:364] duration metric: took 13.314µs to acquireMachinesLock for "ha-406291"
	I0621 18:26:42.517485   30068 start.go:93] Provisioning new machine with config: &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:26:42.517531   30068 start.go:125] createHost starting for "" (driver="kvm2")
	I0621 18:26:42.518937   30068 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0621 18:26:42.519071   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:26:42.519109   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:26:42.533235   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0621 18:26:42.533669   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:26:42.534312   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:26:42.534360   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:26:42.534665   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:26:42.534880   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:26:42.535018   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:26:42.535180   30068 start.go:159] libmachine.API.Create for "ha-406291" (driver="kvm2")
	I0621 18:26:42.535209   30068 client.go:168] LocalClient.Create starting
	I0621 18:26:42.535233   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem
	I0621 18:26:42.535267   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:26:42.535282   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:26:42.535339   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem
	I0621 18:26:42.535357   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:26:42.535367   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:26:42.535383   30068 main.go:141] libmachine: Running pre-create checks...
	I0621 18:26:42.535396   30068 main.go:141] libmachine: (ha-406291) Calling .PreCreateCheck
	I0621 18:26:42.535734   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:26:42.536101   30068 main.go:141] libmachine: Creating machine...
	I0621 18:26:42.536113   30068 main.go:141] libmachine: (ha-406291) Calling .Create
	I0621 18:26:42.536232   30068 main.go:141] libmachine: (ha-406291) Creating KVM machine...
	I0621 18:26:42.537484   30068 main.go:141] libmachine: (ha-406291) DBG | found existing default KVM network
	I0621 18:26:42.538310   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.538153   30091 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0621 18:26:42.538339   30068 main.go:141] libmachine: (ha-406291) DBG | created network xml: 
	I0621 18:26:42.538346   30068 main.go:141] libmachine: (ha-406291) DBG | <network>
	I0621 18:26:42.538355   30068 main.go:141] libmachine: (ha-406291) DBG |   <name>mk-ha-406291</name>
	I0621 18:26:42.538371   30068 main.go:141] libmachine: (ha-406291) DBG |   <dns enable='no'/>
	I0621 18:26:42.538385   30068 main.go:141] libmachine: (ha-406291) DBG |   
	I0621 18:26:42.538392   30068 main.go:141] libmachine: (ha-406291) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0621 18:26:42.538400   30068 main.go:141] libmachine: (ha-406291) DBG |     <dhcp>
	I0621 18:26:42.538412   30068 main.go:141] libmachine: (ha-406291) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0621 18:26:42.538421   30068 main.go:141] libmachine: (ha-406291) DBG |     </dhcp>
	I0621 18:26:42.538439   30068 main.go:141] libmachine: (ha-406291) DBG |   </ip>
	I0621 18:26:42.538451   30068 main.go:141] libmachine: (ha-406291) DBG |   
	I0621 18:26:42.538458   30068 main.go:141] libmachine: (ha-406291) DBG | </network>
	I0621 18:26:42.538470   30068 main.go:141] libmachine: (ha-406291) DBG | 
	I0621 18:26:42.543401   30068 main.go:141] libmachine: (ha-406291) DBG | trying to create private KVM network mk-ha-406291 192.168.39.0/24...
	I0621 18:26:42.606041   30068 main.go:141] libmachine: (ha-406291) DBG | private KVM network mk-ha-406291 192.168.39.0/24 created
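
For context: the XML printed in the DBG lines above is what the kvm2 driver hands to libvirt to create the private network. A minimal sketch of that define-and-start step, assuming the libvirt.org/go/libvirt bindings (illustrative only, not the driver's actual code):

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

// Illustrative sketch: define a private libvirt network from XML, start it,
// and mark it autostart -- roughly what "created network xml" followed by
// "private KVM network ... created" corresponds to in the log above.
func createPrivateNetwork(uri, networkXML string) error {
	conn, err := libvirt.NewConnect(uri) // e.g. "qemu:///system", as in KVMQemuURI
	if err != nil {
		return err
	}
	defer conn.Close()

	net, err := conn.NetworkDefineXML(networkXML) // persist the <network> definition
	if err != nil {
		return err
	}
	defer net.Free()

	if err := net.Create(); err != nil { // start the defined network
		return err
	}
	return net.SetAutostart(true)
}

func main() {
	xml := `<network>
  <name>mk-ha-406291</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`
	if err := createPrivateNetwork("qemu:///system", xml); err != nil {
		log.Fatal(err)
	}
}
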
	I0621 18:26:42.606072   30068 main.go:141] libmachine: (ha-406291) Setting up store path in /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 ...
	I0621 18:26:42.606091   30068 main.go:141] libmachine: (ha-406291) Building disk image from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 18:26:42.606165   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.606075   30091 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.606280   30068 main.go:141] libmachine: (ha-406291) Downloading /home/jenkins/minikube-integration/19112-8111/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0621 18:26:42.829374   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.829262   30091 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa...
	I0621 18:26:42.941790   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.941666   30091 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/ha-406291.rawdisk...
	I0621 18:26:42.941834   30068 main.go:141] libmachine: (ha-406291) DBG | Writing magic tar header
	I0621 18:26:42.941844   30068 main.go:141] libmachine: (ha-406291) DBG | Writing SSH key tar header
	I0621 18:26:42.941852   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.941778   30091 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 ...
	I0621 18:26:42.941909   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291
	I0621 18:26:42.941989   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 (perms=drwx------)
	I0621 18:26:42.942007   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines
	I0621 18:26:42.942019   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines (perms=drwxr-xr-x)
	I0621 18:26:42.942033   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube (perms=drwxr-xr-x)
	I0621 18:26:42.942053   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111 (perms=drwxrwxr-x)
	I0621 18:26:42.942060   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.942069   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111
	I0621 18:26:42.942075   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0621 18:26:42.942080   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0621 18:26:42.942088   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins
	I0621 18:26:42.942104   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0621 18:26:42.942117   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home
	I0621 18:26:42.942128   30068 main.go:141] libmachine: (ha-406291) DBG | Skipping /home - not owner
	I0621 18:26:42.942142   30068 main.go:141] libmachine: (ha-406291) Creating domain...
	I0621 18:26:42.943154   30068 main.go:141] libmachine: (ha-406291) define libvirt domain using xml: 
	I0621 18:26:42.943176   30068 main.go:141] libmachine: (ha-406291) <domain type='kvm'>
	I0621 18:26:42.943183   30068 main.go:141] libmachine: (ha-406291)   <name>ha-406291</name>
	I0621 18:26:42.943188   30068 main.go:141] libmachine: (ha-406291)   <memory unit='MiB'>2200</memory>
	I0621 18:26:42.943199   30068 main.go:141] libmachine: (ha-406291)   <vcpu>2</vcpu>
	I0621 18:26:42.943203   30068 main.go:141] libmachine: (ha-406291)   <features>
	I0621 18:26:42.943208   30068 main.go:141] libmachine: (ha-406291)     <acpi/>
	I0621 18:26:42.943212   30068 main.go:141] libmachine: (ha-406291)     <apic/>
	I0621 18:26:42.943217   30068 main.go:141] libmachine: (ha-406291)     <pae/>
	I0621 18:26:42.943223   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943229   30068 main.go:141] libmachine: (ha-406291)   </features>
	I0621 18:26:42.943234   30068 main.go:141] libmachine: (ha-406291)   <cpu mode='host-passthrough'>
	I0621 18:26:42.943255   30068 main.go:141] libmachine: (ha-406291)   
	I0621 18:26:42.943266   30068 main.go:141] libmachine: (ha-406291)   </cpu>
	I0621 18:26:42.943284   30068 main.go:141] libmachine: (ha-406291)   <os>
	I0621 18:26:42.943318   30068 main.go:141] libmachine: (ha-406291)     <type>hvm</type>
	I0621 18:26:42.943328   30068 main.go:141] libmachine: (ha-406291)     <boot dev='cdrom'/>
	I0621 18:26:42.943333   30068 main.go:141] libmachine: (ha-406291)     <boot dev='hd'/>
	I0621 18:26:42.943341   30068 main.go:141] libmachine: (ha-406291)     <bootmenu enable='no'/>
	I0621 18:26:42.943345   30068 main.go:141] libmachine: (ha-406291)   </os>
	I0621 18:26:42.943355   30068 main.go:141] libmachine: (ha-406291)   <devices>
	I0621 18:26:42.943360   30068 main.go:141] libmachine: (ha-406291)     <disk type='file' device='cdrom'>
	I0621 18:26:42.943371   30068 main.go:141] libmachine: (ha-406291)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/boot2docker.iso'/>
	I0621 18:26:42.943384   30068 main.go:141] libmachine: (ha-406291)       <target dev='hdc' bus='scsi'/>
	I0621 18:26:42.943397   30068 main.go:141] libmachine: (ha-406291)       <readonly/>
	I0621 18:26:42.943404   30068 main.go:141] libmachine: (ha-406291)     </disk>
	I0621 18:26:42.943417   30068 main.go:141] libmachine: (ha-406291)     <disk type='file' device='disk'>
	I0621 18:26:42.943429   30068 main.go:141] libmachine: (ha-406291)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0621 18:26:42.943445   30068 main.go:141] libmachine: (ha-406291)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/ha-406291.rawdisk'/>
	I0621 18:26:42.943456   30068 main.go:141] libmachine: (ha-406291)       <target dev='hda' bus='virtio'/>
	I0621 18:26:42.943478   30068 main.go:141] libmachine: (ha-406291)     </disk>
	I0621 18:26:42.943499   30068 main.go:141] libmachine: (ha-406291)     <interface type='network'>
	I0621 18:26:42.943509   30068 main.go:141] libmachine: (ha-406291)       <source network='mk-ha-406291'/>
	I0621 18:26:42.943513   30068 main.go:141] libmachine: (ha-406291)       <model type='virtio'/>
	I0621 18:26:42.943519   30068 main.go:141] libmachine: (ha-406291)     </interface>
	I0621 18:26:42.943526   30068 main.go:141] libmachine: (ha-406291)     <interface type='network'>
	I0621 18:26:42.943532   30068 main.go:141] libmachine: (ha-406291)       <source network='default'/>
	I0621 18:26:42.943539   30068 main.go:141] libmachine: (ha-406291)       <model type='virtio'/>
	I0621 18:26:42.943544   30068 main.go:141] libmachine: (ha-406291)     </interface>
	I0621 18:26:42.943549   30068 main.go:141] libmachine: (ha-406291)     <serial type='pty'>
	I0621 18:26:42.943554   30068 main.go:141] libmachine: (ha-406291)       <target port='0'/>
	I0621 18:26:42.943560   30068 main.go:141] libmachine: (ha-406291)     </serial>
	I0621 18:26:42.943565   30068 main.go:141] libmachine: (ha-406291)     <console type='pty'>
	I0621 18:26:42.943571   30068 main.go:141] libmachine: (ha-406291)       <target type='serial' port='0'/>
	I0621 18:26:42.943583   30068 main.go:141] libmachine: (ha-406291)     </console>
	I0621 18:26:42.943593   30068 main.go:141] libmachine: (ha-406291)     <rng model='virtio'>
	I0621 18:26:42.943602   30068 main.go:141] libmachine: (ha-406291)       <backend model='random'>/dev/random</backend>
	I0621 18:26:42.943609   30068 main.go:141] libmachine: (ha-406291)     </rng>
	I0621 18:26:42.943617   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943621   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943627   30068 main.go:141] libmachine: (ha-406291)   </devices>
	I0621 18:26:42.943631   30068 main.go:141] libmachine: (ha-406291) </domain>
	I0621 18:26:42.943638   30068 main.go:141] libmachine: (ha-406291) 
	I0621 18:26:42.948298   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:44:10:c4 in network default
	I0621 18:26:42.948968   30068 main.go:141] libmachine: (ha-406291) Ensuring networks are active...
	I0621 18:26:42.948988   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:42.949710   30068 main.go:141] libmachine: (ha-406291) Ensuring network default is active
	I0621 18:26:42.950033   30068 main.go:141] libmachine: (ha-406291) Ensuring network mk-ha-406291 is active
	I0621 18:26:42.950493   30068 main.go:141] libmachine: (ha-406291) Getting domain xml...
	I0621 18:26:42.951151   30068 main.go:141] libmachine: (ha-406291) Creating domain...
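
The "define libvirt domain using xml" and "Creating domain..." steps map onto two libvirt calls: defining a persistent domain from the XML printed above, then creating (booting) it. Continuing the earlier sketch with the same libvirt.org/go/libvirt import (illustrative only; the helper name is made up):

// Illustrative sketch: define a persistent domain from XML and boot it.
func defineAndStartDomain(conn *libvirt.Connect, domainXML string) (*libvirt.Domain, error) {
	dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
	if err != nil {
		return nil, err
	}
	if err := dom.Create(); err != nil { // "Creating domain..." boots the VM
		dom.Free()
		return nil, err
	}
	return dom, nil
}
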
	I0621 18:26:44.128421   30068 main.go:141] libmachine: (ha-406291) Waiting to get IP...
	I0621 18:26:44.129183   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.129530   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.129550   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.129513   30091 retry.go:31] will retry after 273.280189ms: waiting for machine to come up
	I0621 18:26:44.404590   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.405440   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.405467   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.405386   30091 retry.go:31] will retry after 363.287979ms: waiting for machine to come up
	I0621 18:26:44.769749   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.770188   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.770217   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.770146   30091 retry.go:31] will retry after 445.9009ms: waiting for machine to come up
	I0621 18:26:45.217708   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:45.218113   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:45.218132   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:45.218075   30091 retry.go:31] will retry after 497.769852ms: waiting for machine to come up
	I0621 18:26:45.717913   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:45.718380   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:45.718402   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:45.718333   30091 retry.go:31] will retry after 609.412902ms: waiting for machine to come up
	I0621 18:26:46.329589   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:46.330043   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:46.330077   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:46.330033   30091 retry.go:31] will retry after 668.226784ms: waiting for machine to come up
	I0621 18:26:46.999851   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:47.000352   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:47.000399   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:47.000310   30091 retry.go:31] will retry after 928.90777ms: waiting for machine to come up
	I0621 18:26:47.931043   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:47.931568   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:47.931598   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:47.931527   30091 retry.go:31] will retry after 1.407643188s: waiting for machine to come up
	I0621 18:26:49.341126   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:49.341529   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:49.341557   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:49.341489   30091 retry.go:31] will retry after 1.657120945s: waiting for machine to come up
	I0621 18:26:51.001518   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:51.001999   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:51.002022   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:51.001955   30091 retry.go:31] will retry after 1.506025988s: waiting for machine to come up
	I0621 18:26:52.509823   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:52.510314   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:52.510342   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:52.510269   30091 retry.go:31] will retry after 2.859818514s: waiting for machine to come up
	I0621 18:26:55.371181   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:55.371726   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:55.371755   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:55.371678   30091 retry.go:31] will retry after 3.374080501s: waiting for machine to come up
	I0621 18:26:58.747494   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:58.748019   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:58.748039   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:58.747991   30091 retry.go:31] will retry after 4.386740875s: waiting for machine to come up
	I0621 18:27:03.136546   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.137046   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has current primary IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.137063   30068 main.go:141] libmachine: (ha-406291) Found IP for machine: 192.168.39.198
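
The "will retry after ..." lines above are the driver polling the network's DHCP leases until the VM's MAC address shows up with an address. A rough equivalent of that wait loop, again against libvirt.org/go/libvirt plus the standard fmt, strings, and time packages (hypothetical helper, simplified backoff):

// Illustrative sketch: poll DHCP leases until the given MAC has an address.
func waitForIP(net *libvirt.Network, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	wait := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		leases, err := net.GetDHCPLeases()
		if err != nil {
			return "", err
		}
		for _, l := range leases {
			if strings.EqualFold(l.Mac, mac) && l.IPaddr != "" {
				return l.IPaddr, nil // e.g. 192.168.39.198 for 52:54:00:38:dc:46 above
			}
		}
		time.Sleep(wait)
		if wait < 5*time.Second {
			wait *= 2 // grow the interval, mirroring the increasing retry delays in the log
		}
	}
	return "", fmt.Errorf("no DHCP lease for %s within %s", mac, timeout)
}
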
	I0621 18:27:03.137079   30068 main.go:141] libmachine: (ha-406291) Reserving static IP address...
	I0621 18:27:03.137427   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find host DHCP lease matching {name: "ha-406291", mac: "52:54:00:38:dc:46", ip: "192.168.39.198"} in network mk-ha-406291
	I0621 18:27:03.211473   30068 main.go:141] libmachine: (ha-406291) DBG | Getting to WaitForSSH function...
	I0621 18:27:03.211506   30068 main.go:141] libmachine: (ha-406291) Reserved static IP address: 192.168.39.198
	I0621 18:27:03.211519   30068 main.go:141] libmachine: (ha-406291) Waiting for SSH to be available...
	I0621 18:27:03.214029   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.214477   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291
	I0621 18:27:03.214509   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find defined IP address of network mk-ha-406291 interface with MAC address 52:54:00:38:dc:46
	I0621 18:27:03.214661   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH client type: external
	I0621 18:27:03.214702   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa (-rw-------)
	I0621 18:27:03.214745   30068 main.go:141] libmachine: (ha-406291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:03.214771   30068 main.go:141] libmachine: (ha-406291) DBG | About to run SSH command:
	I0621 18:27:03.214784   30068 main.go:141] libmachine: (ha-406291) DBG | exit 0
	I0621 18:27:03.218578   30068 main.go:141] libmachine: (ha-406291) DBG | SSH cmd err, output: exit status 255: 
	I0621 18:27:03.218603   30068 main.go:141] libmachine: (ha-406291) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0621 18:27:03.218614   30068 main.go:141] libmachine: (ha-406291) DBG | command : exit 0
	I0621 18:27:03.218630   30068 main.go:141] libmachine: (ha-406291) DBG | err     : exit status 255
	I0621 18:27:03.218643   30068 main.go:141] libmachine: (ha-406291) DBG | output  : 
	I0621 18:27:06.220803   30068 main.go:141] libmachine: (ha-406291) DBG | Getting to WaitForSSH function...
	I0621 18:27:06.223287   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.223552   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.223591   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.223725   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH client type: external
	I0621 18:27:06.223751   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa (-rw-------)
	I0621 18:27:06.223775   30068 main.go:141] libmachine: (ha-406291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:06.223788   30068 main.go:141] libmachine: (ha-406291) DBG | About to run SSH command:
	I0621 18:27:06.223797   30068 main.go:141] libmachine: (ha-406291) DBG | exit 0
	I0621 18:27:06.345962   30068 main.go:141] libmachine: (ha-406291) DBG | SSH cmd err, output: <nil>: 
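
WaitForSSH shells out to ssh and runs "exit 0" until it returns status 0 (the first attempt above failed with exit status 255 because the guest had no address yet). A standard-library sketch of such a probe using os/exec; the option list mirrors the one logged above and the helper name is made up:

// Illustrative sketch: report whether "exit 0" succeeds over SSH.
func sshReady(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+ip,
		"exit 0")
	return cmd.Run() == nil // nil error means the remote command exited 0
}
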
	I0621 18:27:06.346198   30068 main.go:141] libmachine: (ha-406291) KVM machine creation complete!
	I0621 18:27:06.346530   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:27:06.347151   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:06.347376   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:06.347539   30068 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0621 18:27:06.347553   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:06.349257   30068 main.go:141] libmachine: Detecting operating system of created instance...
	I0621 18:27:06.349272   30068 main.go:141] libmachine: Waiting for SSH to be available...
	I0621 18:27:06.349278   30068 main.go:141] libmachine: Getting to WaitForSSH function...
	I0621 18:27:06.349284   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.351365   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.351709   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.351738   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.351848   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.352053   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.352215   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.352441   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.352676   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.352926   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.352939   30068 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0621 18:27:06.449038   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:06.449066   30068 main.go:141] libmachine: Detecting the provisioner...
	I0621 18:27:06.449077   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.451811   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.452202   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.452223   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.452405   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.452602   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.452762   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.452898   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.453074   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.453321   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.453334   30068 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0621 18:27:06.550539   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0621 18:27:06.550611   30068 main.go:141] libmachine: found compatible host: buildroot
	I0621 18:27:06.550618   30068 main.go:141] libmachine: Provisioning with buildroot...
	I0621 18:27:06.550625   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.550871   30068 buildroot.go:166] provisioning hostname "ha-406291"
	I0621 18:27:06.550891   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.551068   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.553701   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.554112   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.554138   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.554279   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.554452   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.554601   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.554725   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.554869   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.555029   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.555040   30068 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291 && echo "ha-406291" | sudo tee /etc/hostname
	I0621 18:27:06.664012   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291
	
	I0621 18:27:06.664038   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.666600   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.666923   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.666952   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.667091   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.667277   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.667431   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.667559   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.667745   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.667932   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.667949   30068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:27:06.778156   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:06.778199   30068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:27:06.778224   30068 buildroot.go:174] setting up certificates
	I0621 18:27:06.778237   30068 provision.go:84] configureAuth start
	I0621 18:27:06.778250   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.778526   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:06.781267   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.781583   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.781610   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.781773   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.784225   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.784546   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.784564   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.784717   30068 provision.go:143] copyHostCerts
	I0621 18:27:06.784747   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:06.784796   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:27:06.784813   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:06.784893   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:27:06.784992   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:06.785017   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:27:06.785023   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:06.785064   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:27:06.785126   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:06.785153   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:27:06.785162   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:06.785194   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:27:06.785257   30068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291 san=[127.0.0.1 192.168.39.198 ha-406291 localhost minikube]
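
The server certificate above is issued from the local CA with the organization and SANs listed in the log line. A minimal crypto/x509 sketch of signing such a certificate (standard library only; this illustrates the shape of the operation, not minikube's own cert helper):

// Illustrative sketch: sign a server cert with IP and DNS SANs using an existing CA.
// Imports assumed: crypto/rand, crypto/rsa, crypto/x509, crypto/x509/pkix, math/big, net, time.
func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, serverPub *rsa.PublicKey) ([]byte, error) {
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-406291"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.198")},
		DNSNames:     []string{"ha-406291", "localhost", "minikube"},
	}
	// Returns DER bytes; PEM-encode with pem.Encode if writing server.pem to disk.
	return x509.CreateCertificate(rand.Reader, tmpl, caCert, serverPub, caKey)
}
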
	I0621 18:27:06.904910   30068 provision.go:177] copyRemoteCerts
	I0621 18:27:06.904976   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:27:06.905004   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.907600   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.907883   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.907916   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.908115   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.908308   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.908462   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.908599   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:06.987463   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:27:06.987540   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:27:07.009572   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:27:07.009661   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0621 18:27:07.031219   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:27:07.031333   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 18:27:07.052682   30068 provision.go:87] duration metric: took 274.433059ms to configureAuth
	I0621 18:27:07.052709   30068 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:27:07.052895   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:07.052984   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.055368   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.055720   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.055742   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.055971   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.056161   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.056324   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.056453   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.056615   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:07.056785   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:07.056814   30068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:27:07.307055   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 18:27:07.307083   30068 main.go:141] libmachine: Checking connection to Docker...
	I0621 18:27:07.307105   30068 main.go:141] libmachine: (ha-406291) Calling .GetURL
	I0621 18:27:07.308373   30068 main.go:141] libmachine: (ha-406291) DBG | Using libvirt version 6000000
	I0621 18:27:07.310322   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.310631   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.310658   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.310756   30068 main.go:141] libmachine: Docker is up and running!
	I0621 18:27:07.310768   30068 main.go:141] libmachine: Reticulating splines...
	I0621 18:27:07.310774   30068 client.go:171] duration metric: took 24.775558818s to LocalClient.Create
	I0621 18:27:07.310795   30068 start.go:167] duration metric: took 24.775614868s to libmachine.API.Create "ha-406291"
	I0621 18:27:07.310807   30068 start.go:293] postStartSetup for "ha-406291" (driver="kvm2")
	I0621 18:27:07.310818   30068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:27:07.310835   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.311186   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:27:07.311208   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.313308   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.313543   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.313581   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.313682   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.313855   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.314042   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.314209   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.391859   30068 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:27:07.396062   30068 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:27:07.396083   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:27:07.396132   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:27:07.396193   30068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:27:07.396202   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:27:07.396289   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:27:07.405435   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:07.427927   30068 start.go:296] duration metric: took 117.075834ms for postStartSetup
	I0621 18:27:07.427984   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:27:07.428562   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:07.431157   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.431479   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.431523   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.431791   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:07.431969   30068 start.go:128] duration metric: took 24.914429669s to createHost
	I0621 18:27:07.431990   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.434121   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.434421   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.434445   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.434510   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.434692   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.434865   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.435009   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.435168   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:07.435372   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:07.435384   30068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0621 18:27:07.530141   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718994427.508226463
	
	I0621 18:27:07.530165   30068 fix.go:216] guest clock: 1718994427.508226463
	I0621 18:27:07.530173   30068 fix.go:229] Guest: 2024-06-21 18:27:07.508226463 +0000 UTC Remote: 2024-06-21 18:27:07.431981059 +0000 UTC m=+25.016949864 (delta=76.245404ms)
	I0621 18:27:07.530199   30068 fix.go:200] guest clock delta is within tolerance: 76.245404ms
	I0621 18:27:07.530204   30068 start.go:83] releasing machines lock for "ha-406291", held for 25.012726918s
	I0621 18:27:07.530222   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.530466   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:07.532753   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.533110   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.533151   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.533275   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533702   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533877   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533978   30068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:27:07.534028   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.534087   30068 ssh_runner.go:195] Run: cat /version.json
	I0621 18:27:07.534115   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.536489   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536798   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.536828   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536845   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536983   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.537154   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.537312   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.537330   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.537337   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.537509   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.537507   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.537675   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.537830   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.537968   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.610886   30068 ssh_runner.go:195] Run: systemctl --version
	I0621 18:27:07.648150   30068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:27:07.798080   30068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:27:07.803683   30068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:27:07.803731   30068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:27:07.820345   30068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0621 18:27:07.820363   30068 start.go:494] detecting cgroup driver to use...
	I0621 18:27:07.820412   30068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:27:07.835960   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:27:07.849269   30068 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:27:07.849324   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:27:07.861858   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:27:07.874371   30068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:27:07.984965   30068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:27:08.126897   30068 docker.go:233] disabling docker service ...
	I0621 18:27:08.126973   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:27:08.140294   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:27:08.152460   30068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:27:08.289101   30068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:27:08.414578   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 18:27:08.428193   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:27:08.445335   30068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:27:08.445406   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.454715   30068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:27:08.454780   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.464286   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.473688   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.483215   30068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:27:08.492907   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.502386   30068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.518138   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
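The sed commands above pin cri-o's pause image and switch it to the cgroupfs cgroup manager by rewriting /etc/crio/crio.conf.d/02-crio.conf in place. A hedged Go sketch of the same two substitutions applied to an in-memory copy of the file (illustration only, not minikube's code):

package main

import (
	"fmt"
	"regexp"
)

// applyCrioOverrides mimics the sed edits from the log: pin the pause
// image and force the cgroupfs cgroup manager in a 02-crio.conf snippet.
func applyCrioOverrides(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(applyCrioOverrides(in))
}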
	I0621 18:27:08.527822   30068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:27:08.536491   30068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 18:27:08.536537   30068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 18:27:08.548343   30068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
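Because the bridge-nf-call-iptables sysctl is not readable yet, the run falls back to loading br_netfilter and then enables IPv4 forwarding before restarting crio. A hedged Go sketch of that fallback sequence (the commands mirror the log; running them for real requires root inside the VM):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback above: if the
// net.bridge.bridge-nf-call-iptables sysctl cannot be read, load the
// br_netfilter module, then enable IPv4 forwarding. Sketch only; the
// real commands run inside the VM via ssh_runner.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("loading br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	fmt.Println(ensureBridgeNetfilter())
}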
	I0621 18:27:08.557395   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:27:08.668782   30068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 18:27:08.793146   30068 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:27:08.793228   30068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:27:08.797886   30068 start.go:562] Will wait 60s for crictl version
	I0621 18:27:08.797933   30068 ssh_runner.go:195] Run: which crictl
	I0621 18:27:08.801183   30068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:27:08.838953   30068 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:27:08.839028   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:27:08.865047   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:27:08.892059   30068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:27:08.893365   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:08.895801   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:08.896174   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:08.896198   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:08.896377   30068 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:27:08.900124   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
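The bash one-liner above makes the host.minikube.internal entry idempotent: it filters out any stale line for that name, appends a fresh IP/name pair, and copies the result back over /etc/hosts. A small Go sketch of the same upsert logic (sketch only; the real edit runs over SSH with sudo):

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any stale line ending in "<TAB>name" and appends
// a fresh "IP<TAB>name" entry, mirroring the grep -v / echo pipeline.
func upsertHostsEntry(hosts, ip, name string) string {
	var out []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // remove the stale entry
		}
		out = append(out, line)
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.39.2\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
}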
	I0621 18:27:08.912152   30068 kubeadm.go:877] updating cluster {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0621 18:27:08.912252   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:27:08.912299   30068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:27:08.941267   30068 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0621 18:27:08.941328   30068 ssh_runner.go:195] Run: which lz4
	I0621 18:27:08.944757   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0621 18:27:08.944843   30068 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0621 18:27:08.948482   30068 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0621 18:27:08.948507   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0621 18:27:10.186487   30068 crio.go:462] duration metric: took 1.241671996s to copy over tarball
	I0621 18:27:10.186568   30068 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0621 18:27:12.219224   30068 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.032622286s)
	I0621 18:27:12.219256   30068 crio.go:469] duration metric: took 2.032747658s to extract the tarball
	I0621 18:27:12.219265   30068 ssh_runner.go:146] rm: /preloaded.tar.lz4
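The preload path above first checks whether the tarball already exists on the node, copies it over when it does not, extracts it into /var with lz4 while preserving xattrs, and finally removes it. A rough Go sketch of the extract-and-clean-up step (paths taken from the log; this is an illustration, not minikube's implementation, and requires root plus the lz4 tool):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks the preloaded image tarball into /var, keeping
// extended attributes, and deletes the tarball afterwards.
func extractPreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball missing, would scp it first: %w", err)
	}
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if err := cmd.Run(); err != nil {
		return err
	}
	return os.Remove(tarball) // the log removes it via ssh_runner as well
}

func main() {
	fmt.Println(extractPreload("/preloaded.tar.lz4"))
}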
	I0621 18:27:12.255526   30068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:27:12.297692   30068 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 18:27:12.297715   30068 cache_images.go:84] Images are preloaded, skipping loading
	I0621 18:27:12.297725   30068 kubeadm.go:928] updating node { 192.168.39.198 8443 v1.30.2 crio true true} ...
	I0621 18:27:12.297863   30068 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0621 18:27:12.297956   30068 ssh_runner.go:195] Run: crio config
	I0621 18:27:12.347243   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:27:12.347276   30068 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0621 18:27:12.347288   30068 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0621 18:27:12.347314   30068 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.198 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-406291 NodeName:ha-406291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0621 18:27:12.347487   30068 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-406291"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0621 18:27:12.347514   30068 kube-vip.go:115] generating kube-vip config ...
	I0621 18:27:12.347563   30068 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:27:12.362180   30068 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:27:12.362273   30068 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
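The lb_enable and lb_port entries in the manifest above only appear because the earlier modprobe of the IPVS modules succeeded ("auto-enabling control-plane load-balancing in kube-vip"). A hedged Go sketch of that gating decision (illustrative only; the real check runs over SSH inside the VM):

package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable reports whether the IPVS kernel modules needed by
// kube-vip's control-plane load-balancing can be loaded, mirroring the
// "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack" probe.
func ipvsAvailable() bool {
	cmd := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack")
	return cmd.Run() == nil
}

func main() {
	env := map[string]string{"cp_enable": "true"}
	if ipvsAvailable() {
		// Only then does the generated manifest carry lb_enable/lb_port.
		env["lb_enable"] = "true"
		env["lb_port"] = "8443"
	}
	fmt.Println(env)
}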
	I0621 18:27:12.362316   30068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:27:12.371448   30068 binaries.go:44] Found k8s binaries, skipping transfer
	I0621 18:27:12.371499   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0621 18:27:12.380031   30068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0621 18:27:12.395354   30068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0621 18:27:12.410533   30068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0621 18:27:12.425474   30068 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0621 18:27:12.440059   30068 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0621 18:27:12.443523   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:27:12.454828   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:27:12.572486   30068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0621 18:27:12.589057   30068 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.198
	I0621 18:27:12.589078   30068 certs.go:194] generating shared ca certs ...
	I0621 18:27:12.589095   30068 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.589221   30068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:27:12.589272   30068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:27:12.589282   30068 certs.go:256] generating profile certs ...
	I0621 18:27:12.589333   30068 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:27:12.589346   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt with IP's: []
	I0621 18:27:12.759863   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt ...
	I0621 18:27:12.759890   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt: {Name:mk1350197087e6f37ca28e80a43c199beace4f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.760090   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key ...
	I0621 18:27:12.760104   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key: {Name:mk90994b992a268304b337419707e3332d3f039a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.760206   30068 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92
	I0621 18:27:12.760222   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.254]
	I0621 18:27:13.132336   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 ...
	I0621 18:27:13.132362   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92: {Name:mke7daa70ff2d7bf8fa87eea51b1ed6731c0dd6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.132530   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92 ...
	I0621 18:27:13.132546   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92: {Name:mk310235904dba1c4db66ef73b8dcc06ff030051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.132647   30068 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:27:13.132737   30068 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
	I0621 18:27:13.132790   30068 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:27:13.132806   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt with IP's: []
	I0621 18:27:13.317891   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt ...
	I0621 18:27:13.317927   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt: {Name:mk5e450ef3633fa54e81eaeb94f9408c94729912 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.318119   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key ...
	I0621 18:27:13.318132   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key: {Name:mk3a1443924b05c36251566d5313d0eeb467e0fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
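The certs.go/crypto.go steps above issue the profile's client, apiserver, and aggregator certificates; the apiserver cert carries IP SANs for the service VIP, localhost, the node IP, and the HA virtual IP 192.168.39.254. A self-contained Go sketch of issuing a certificate with those SANs (self-signed here for brevity, whereas the real certs are signed by minikubeCA; key size and validity period are assumptions):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("192.168.39.198"),
			net.ParseIP("192.168.39.254"), // HA virtual IP
		},
	}
	// Self-signed for brevity; minikube signs the real cert with minikubeCA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}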
	I0621 18:27:13.318220   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:27:13.318241   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:27:13.318251   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:27:13.318264   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:27:13.318274   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:27:13.318290   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:27:13.318302   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:27:13.318314   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:27:13.318363   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:27:13.318396   30068 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:27:13.318406   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:27:13.318428   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:27:13.318449   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:27:13.318469   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:27:13.318506   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:13.318531   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.318544   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.318556   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.319121   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:27:13.345382   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:27:13.379289   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:27:13.406853   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:27:13.430624   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0621 18:27:13.452498   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0621 18:27:13.474381   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:27:13.497475   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:27:13.520548   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:27:13.543849   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:27:13.569722   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:27:13.594191   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0621 18:27:13.611312   30068 ssh_runner.go:195] Run: openssl version
	I0621 18:27:13.616881   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:27:13.627054   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.631162   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.631214   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.636845   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:27:13.648132   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:27:13.658846   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.663074   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.663140   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.668358   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 18:27:13.678369   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:27:13.688293   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.692517   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.692581   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.697837   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
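Each CA bundle copied above is also linked into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem) so hash-based lookups can find it. A sketch of that hash-and-symlink step, shelling out to the openssl CLI (assumed to be on PATH; the real run does this over SSH with sudo):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash recreates the "<hash>.0" symlink convention used by
// OpenSSL's certificate directory lookup.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mimic ln -fs: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(err)
}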
	I0621 18:27:13.707967   30068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:27:13.711761   30068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 18:27:13.711821   30068 kubeadm.go:391] StartCluster: {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:27:13.711887   30068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0621 18:27:13.711960   30068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0621 18:27:13.752929   30068 cri.go:89] found id: ""
	I0621 18:27:13.753017   30068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0621 18:27:13.762514   30068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0621 18:27:13.771612   30068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0621 18:27:13.781740   30068 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0621 18:27:13.781758   30068 kubeadm.go:156] found existing configuration files:
	
	I0621 18:27:13.781811   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0621 18:27:13.790876   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0621 18:27:13.790943   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0621 18:27:13.800011   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0621 18:27:13.809117   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0621 18:27:13.809168   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0621 18:27:13.818279   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0621 18:27:13.827522   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0621 18:27:13.827584   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0621 18:27:13.836671   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0621 18:27:13.845242   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0621 18:27:13.845298   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
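The config check above treats each existing kubeadm file as stale unless it already points at https://control-plane.minikube.internal:8443, and removes it otherwise; here none of the files exist yet, so there is nothing to clean. A hedged Go sketch of that per-file decision (illustration only; the real checks run inside the VM with sudo):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// cleanStaleKubeconfig removes an existing kubeconfig-style file unless it
// already references the expected control-plane endpoint.
func cleanStaleKubeconfig(path, endpoint string) error {
	if _, err := os.Stat(path); err != nil {
		return nil // nothing to clean
	}
	if exec.Command("grep", "-q", endpoint, path).Run() == nil {
		return nil // already points at the right endpoint
	}
	return os.Remove(path)
}

func main() {
	err := cleanStaleKubeconfig("/etc/kubernetes/admin.conf",
		"https://control-plane.minikube.internal:8443")
	fmt.Println(err)
}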
	I0621 18:27:13.854365   30068 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0621 18:27:13.951888   30068 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0621 18:27:13.951970   30068 kubeadm.go:309] [preflight] Running pre-flight checks
	I0621 18:27:14.081675   30068 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0621 18:27:14.081845   30068 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0621 18:27:14.081983   30068 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0621 18:27:14.292951   30068 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0621 18:27:14.423174   30068 out.go:204]   - Generating certificates and keys ...
	I0621 18:27:14.423287   30068 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0621 18:27:14.423355   30068 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0621 18:27:14.524306   30068 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0621 18:27:14.693249   30068 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0621 18:27:14.771462   30068 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0621 18:27:14.965492   30068 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0621 18:27:15.095342   30068 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0621 18:27:15.095646   30068 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-406291 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I0621 18:27:15.247328   30068 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0621 18:27:15.247729   30068 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-406291 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I0621 18:27:15.326656   30068 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0621 18:27:15.470979   30068 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0621 18:27:15.620090   30068 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0621 18:27:15.620402   30068 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0621 18:27:15.715693   30068 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0621 18:27:16.259484   30068 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0621 18:27:16.704626   30068 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0621 18:27:16.836633   30068 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0621 18:27:16.996818   30068 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0621 18:27:16.997517   30068 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0621 18:27:16.999949   30068 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0621 18:27:17.001874   30068 out.go:204]   - Booting up control plane ...
	I0621 18:27:17.001982   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0621 18:27:17.002874   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0621 18:27:17.003729   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0621 18:27:17.018894   30068 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0621 18:27:17.019816   30068 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0621 18:27:17.019944   30068 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0621 18:27:17.138099   30068 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0621 18:27:17.138195   30068 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0621 18:27:17.639115   30068 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.282189ms
	I0621 18:27:17.639214   30068 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0621 18:27:23.502026   30068 kubeadm.go:309] [api-check] The API server is healthy after 5.864418149s
	I0621 18:27:23.512938   30068 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0621 18:27:23.528670   30068 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0621 18:27:24.059886   30068 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0621 18:27:24.060060   30068 kubeadm.go:309] [mark-control-plane] Marking the node ha-406291 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0621 18:27:24.071607   30068 kubeadm.go:309] [bootstrap-token] Using token: ha2utu.p9k0bq1xsr5791t7
	I0621 18:27:24.073185   30068 out.go:204]   - Configuring RBAC rules ...
	I0621 18:27:24.073336   30068 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0621 18:27:24.084336   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0621 18:27:24.092265   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0621 18:27:24.096415   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0621 18:27:24.101175   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0621 18:27:24.104689   30068 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0621 18:27:24.121568   30068 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0621 18:27:24.349610   30068 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0621 18:27:24.907607   30068 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0621 18:27:24.908452   30068 kubeadm.go:309] 
	I0621 18:27:24.908529   30068 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0621 18:27:24.908541   30068 kubeadm.go:309] 
	I0621 18:27:24.908607   30068 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0621 18:27:24.908645   30068 kubeadm.go:309] 
	I0621 18:27:24.908698   30068 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0621 18:27:24.908780   30068 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0621 18:27:24.908863   30068 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0621 18:27:24.908873   30068 kubeadm.go:309] 
	I0621 18:27:24.908975   30068 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0621 18:27:24.908993   30068 kubeadm.go:309] 
	I0621 18:27:24.909038   30068 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0621 18:27:24.909045   30068 kubeadm.go:309] 
	I0621 18:27:24.909086   30068 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0621 18:27:24.909160   30068 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0621 18:27:24.909256   30068 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0621 18:27:24.909274   30068 kubeadm.go:309] 
	I0621 18:27:24.909401   30068 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0621 18:27:24.909522   30068 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0621 18:27:24.909544   30068 kubeadm.go:309] 
	I0621 18:27:24.909671   30068 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ha2utu.p9k0bq1xsr5791t7 \
	I0621 18:27:24.909771   30068 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df \
	I0621 18:27:24.909810   30068 kubeadm.go:309] 	--control-plane 
	I0621 18:27:24.909824   30068 kubeadm.go:309] 
	I0621 18:27:24.909898   30068 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0621 18:27:24.909904   30068 kubeadm.go:309] 
	I0621 18:27:24.909977   30068 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ha2utu.p9k0bq1xsr5791t7 \
	I0621 18:27:24.910064   30068 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df 
	I0621 18:27:24.910664   30068 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0621 18:27:24.910700   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:27:24.910708   30068 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0621 18:27:24.912398   30068 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0621 18:27:24.913676   30068 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0621 18:27:24.919660   30068 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0621 18:27:24.919677   30068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0621 18:27:24.938734   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0621 18:27:25.303975   30068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0621 18:27:25.304070   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:25.304073   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-406291 minikube.k8s.io/updated_at=2024_06_21T18_27_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600 minikube.k8s.io/name=ha-406291 minikube.k8s.io/primary=true
	I0621 18:27:25.334777   30068 ops.go:34] apiserver oom_adj: -16
	I0621 18:27:25.436873   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:25.937461   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:26.436991   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:26.937206   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:27.437152   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:27.937860   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:28.437177   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:28.937036   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:29.437007   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:29.937140   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:30.437060   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:30.937199   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:31.437695   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:31.937675   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:32.437034   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:32.937808   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:33.437793   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:33.937401   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:34.437307   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:34.937172   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:35.437428   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:35.937146   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:36.436951   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:36.937873   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:37.039583   30068 kubeadm.go:1107] duration metric: took 11.735587948s to wait for elevateKubeSystemPrivileges
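The repeated "kubectl get sa default" runs above form a poll loop: the command is retried roughly every 500ms until the default service account exists, which is what the elevateKubeSystemPrivileges duration metric measures. A minimal Go sketch of such a loop (the kubectl path, kubeconfig location, and timeout are illustrative assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls "kubectl get sa default" until it succeeds or the
// timeout expires, mirroring the ~500ms retry cadence visible in the log.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not ready after %s", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	err := waitForDefaultSA("kubectl", "/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println(err)
}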
	W0621 18:27:37.039626   30068 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0621 18:27:37.039635   30068 kubeadm.go:393] duration metric: took 23.327819322s to StartCluster
	I0621 18:27:37.039654   30068 settings.go:142] acquiring lock: {Name:mkdbb660cad4d8fb446e5c2ca4439ea3326e9592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:37.039737   30068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:27:37.040362   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/kubeconfig: {Name:mk87038194ab41f67dd50d90b017d32a83c3da4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:37.040584   30068 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:27:37.040604   30068 start.go:240] waiting for startup goroutines ...
	I0621 18:27:37.040603   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0621 18:27:37.040612   30068 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0621 18:27:37.040669   30068 addons.go:69] Setting storage-provisioner=true in profile "ha-406291"
	I0621 18:27:37.040677   30068 addons.go:69] Setting default-storageclass=true in profile "ha-406291"
	I0621 18:27:37.040699   30068 addons.go:234] Setting addon storage-provisioner=true in "ha-406291"
	I0621 18:27:37.040730   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:27:37.040700   30068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-406291"
	I0621 18:27:37.040772   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:37.041052   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.041075   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.041146   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.041174   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.055583   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42699
	I0621 18:27:37.056062   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.056549   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.056570   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.056894   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.057371   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.057399   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.061343   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44857
	I0621 18:27:37.061846   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.062393   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.062418   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.062721   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.062885   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.065021   30068 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:27:37.065351   30068 kapi.go:59] client config for ha-406291: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt", KeyFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key", CAFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf98a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0621 18:27:37.065825   30068 cert_rotation.go:137] Starting client certificate rotation controller
	I0621 18:27:37.066065   30068 addons.go:234] Setting addon default-storageclass=true in "ha-406291"
	I0621 18:27:37.066106   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:27:37.066471   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.066512   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.072759   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39433
	I0621 18:27:37.073274   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.073791   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.073819   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.074169   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.074346   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.076096   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:37.078312   30068 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0621 18:27:37.079815   30068 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 18:27:37.079840   30068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0621 18:27:37.079864   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:37.081896   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0621 18:27:37.082293   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.082859   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.082878   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.083163   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.083202   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.083607   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.083648   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.083733   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:37.083752   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.083817   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:37.083990   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:37.084135   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:37.084288   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:37.103512   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42081
	I0621 18:27:37.103937   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.104456   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.104473   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.104853   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.105052   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.106976   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:37.107211   30068 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0621 18:27:37.107231   30068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0621 18:27:37.107252   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:37.110295   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.110729   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:37.110755   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.110870   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:37.111030   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:37.111197   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:37.111314   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:37.137868   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0621 18:27:37.228739   30068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 18:27:37.290397   30068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0621 18:27:37.684619   30068 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
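
For reference, the CoreDNS step above boils down to inserting a hosts block ahead of the forward directive so that host.minikube.internal resolves to the host gateway inside the cluster. A minimal Go sketch of that transformation, assuming the Corefile is available as a string (injectHostRecord and the sample Corefile are illustrative, not minikube's actual code, which pipes the ConfigMap through sed as logged above):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS "hosts" block immediately before the
// "forward . /etc/resolv.conf" directive so hostName resolves to hostIP,
// falling through to the normal forwarder for every other name.
func injectHostRecord(corefile, hostIP, hostName string) string {
	block := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", hostIP, hostName)
	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(block)
		}
		out.WriteString(line)
		out.WriteString("\n")
	}
	return strings.TrimSuffix(out.String(), "\n")
}

func main() {
	corefile := `.:53 {
        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
    }`
	fmt.Println(injectHostRecord(corefile, "192.168.39.1", "host.minikube.internal"))
}
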
	I0621 18:27:37.902862   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.902882   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.902957   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.902988   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903179   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903194   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903203   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.903210   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903287   30068 main.go:141] libmachine: (ha-406291) DBG | Closing plugin on server side
	I0621 18:27:37.903300   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903312   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903321   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.903328   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903474   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903485   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903513   30068 main.go:141] libmachine: (ha-406291) DBG | Closing plugin on server side
	I0621 18:27:37.903578   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903595   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903740   30068 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0621 18:27:37.903767   30068 round_trippers.go:469] Request Headers:
	I0621 18:27:37.903778   30068 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:27:37.903784   30068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:27:37.922164   30068 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0621 18:27:37.922691   30068 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0621 18:27:37.922706   30068 round_trippers.go:469] Request Headers:
	I0621 18:27:37.922713   30068 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:27:37.922718   30068 round_trippers.go:473]     Content-Type: application/json
	I0621 18:27:37.922720   30068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:27:37.926249   30068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0621 18:27:37.926491   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.926512   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.926731   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.926748   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.928515   30068 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0621 18:27:37.930095   30068 addons.go:510] duration metric: took 889.47949ms for enable addons: enabled=[storage-provisioner default-storageclass]
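
The addon enablement just completed above amounts to copying a manifest onto the node and applying it with kubectl against an explicit kubeconfig. A small sketch of that apply step with os/exec, assuming kubectl is on PATH (applyManifest and the paths in main are illustrative placeholders, not minikube's runner):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifest shells out to kubectl with an explicit kubeconfig, mirroring
// the "KUBECONFIG=... kubectl apply -f <addon>.yaml" commands in the log.
func applyManifest(kubectl, kubeconfig, manifest string) error {
	cmd := exec.Command(kubectl, "apply", "-f", manifest)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	// Illustrative paths; on the guest these live under /etc/kubernetes/addons.
	for _, m := range []string{"storage-provisioner.yaml", "storageclass.yaml"} {
		if err := applyManifest("kubectl", "/var/lib/minikube/kubeconfig", m); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
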
	I0621 18:27:37.930127   30068 start.go:245] waiting for cluster config update ...
	I0621 18:27:37.930137   30068 start.go:254] writing updated cluster config ...
	I0621 18:27:37.931687   30068 out.go:177] 
	I0621 18:27:37.933039   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:37.933102   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:37.934716   30068 out.go:177] * Starting "ha-406291-m02" control-plane node in "ha-406291" cluster
	I0621 18:27:37.935953   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:27:37.935970   30068 cache.go:56] Caching tarball of preloaded images
	I0621 18:27:37.936052   30068 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:27:37.936063   30068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:27:37.936142   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:37.936325   30068 start.go:360] acquireMachinesLock for ha-406291-m02: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:27:37.936370   30068 start.go:364] duration metric: took 24.972µs to acquireMachinesLock for "ha-406291-m02"
	I0621 18:27:37.936392   30068 start.go:93] Provisioning new machine with config: &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
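
A few lines up, acquireMachinesLock reports a lock spec of Delay:500ms Timeout:13m0s, i.e. "poll every delay until timeout". A generic Go sketch of that acquisition shape, assuming a simple file-based lock primitive (tryLock and the lock path are stand-ins, not minikube's actual locking package):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// tryLock is a stand-in lock primitive: it atomically creates a lock file
// and fails if another holder has already created it.
func tryLock(path string) (release func(), err error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
	if err != nil {
		return nil, err
	}
	f.Close()
	return func() { os.Remove(path) }, nil
}

// acquireWithRetry polls tryLock every delay until the timeout expires,
// matching the Delay/Timeout semantics printed in the lock spec above.
func acquireWithRetry(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		if release, err := tryLock(path); err == nil {
			return release, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out waiting for lock " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireWithRetry("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held; safe to create the next machine")
}
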
	I0621 18:27:37.936481   30068 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0621 18:27:37.938349   30068 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0621 18:27:37.938428   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.938450   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.952767   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34515
	I0621 18:27:37.953176   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.953649   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.953669   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.953963   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.954162   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:37.954301   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:37.954431   30068 start.go:159] libmachine.API.Create for "ha-406291" (driver="kvm2")
	I0621 18:27:37.954456   30068 client.go:168] LocalClient.Create starting
	I0621 18:27:37.954488   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem
	I0621 18:27:37.954518   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:27:37.954538   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:27:37.954589   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem
	I0621 18:27:37.954607   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:27:37.954621   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:27:37.954636   30068 main.go:141] libmachine: Running pre-create checks...
	I0621 18:27:37.954644   30068 main.go:141] libmachine: (ha-406291-m02) Calling .PreCreateCheck
	I0621 18:27:37.954836   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:37.955238   30068 main.go:141] libmachine: Creating machine...
	I0621 18:27:37.955253   30068 main.go:141] libmachine: (ha-406291-m02) Calling .Create
	I0621 18:27:37.955404   30068 main.go:141] libmachine: (ha-406291-m02) Creating KVM machine...
	I0621 18:27:37.956748   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found existing default KVM network
	I0621 18:27:37.956951   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found existing private KVM network mk-ha-406291
	I0621 18:27:37.957069   30068 main.go:141] libmachine: (ha-406291-m02) Setting up store path in /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 ...
	I0621 18:27:37.957091   30068 main.go:141] libmachine: (ha-406291-m02) Building disk image from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 18:27:37.957139   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:37.957062   30460 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:27:37.957278   30068 main.go:141] libmachine: (ha-406291-m02) Downloading /home/jenkins/minikube-integration/19112-8111/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0621 18:27:38.178433   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.178291   30460 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa...
	I0621 18:27:38.322659   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.322470   30460 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/ha-406291-m02.rawdisk...
	I0621 18:27:38.322709   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 (perms=drwx------)
	I0621 18:27:38.322719   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Writing magic tar header
	I0621 18:27:38.322734   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Writing SSH key tar header
	I0621 18:27:38.322745   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.322583   30460 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 ...
	I0621 18:27:38.322758   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02
	I0621 18:27:38.322822   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines
	I0621 18:27:38.322839   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines (perms=drwxr-xr-x)
	I0621 18:27:38.322855   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube (perms=drwxr-xr-x)
	I0621 18:27:38.322864   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111 (perms=drwxrwxr-x)
	I0621 18:27:38.322874   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0621 18:27:38.322882   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0621 18:27:38.322896   30068 main.go:141] libmachine: (ha-406291-m02) Creating domain...
	I0621 18:27:38.322919   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:27:38.322939   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111
	I0621 18:27:38.322950   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0621 18:27:38.322968   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins
	I0621 18:27:38.322980   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home
	I0621 18:27:38.322988   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Skipping /home - not owner
	I0621 18:27:38.324031   30068 main.go:141] libmachine: (ha-406291-m02) define libvirt domain using xml: 
	I0621 18:27:38.324058   30068 main.go:141] libmachine: (ha-406291-m02) <domain type='kvm'>
	I0621 18:27:38.324071   30068 main.go:141] libmachine: (ha-406291-m02)   <name>ha-406291-m02</name>
	I0621 18:27:38.324078   30068 main.go:141] libmachine: (ha-406291-m02)   <memory unit='MiB'>2200</memory>
	I0621 18:27:38.324087   30068 main.go:141] libmachine: (ha-406291-m02)   <vcpu>2</vcpu>
	I0621 18:27:38.324098   30068 main.go:141] libmachine: (ha-406291-m02)   <features>
	I0621 18:27:38.324107   30068 main.go:141] libmachine: (ha-406291-m02)     <acpi/>
	I0621 18:27:38.324116   30068 main.go:141] libmachine: (ha-406291-m02)     <apic/>
	I0621 18:27:38.324125   30068 main.go:141] libmachine: (ha-406291-m02)     <pae/>
	I0621 18:27:38.324134   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324149   30068 main.go:141] libmachine: (ha-406291-m02)   </features>
	I0621 18:27:38.324164   30068 main.go:141] libmachine: (ha-406291-m02)   <cpu mode='host-passthrough'>
	I0621 18:27:38.324173   30068 main.go:141] libmachine: (ha-406291-m02)   
	I0621 18:27:38.324184   30068 main.go:141] libmachine: (ha-406291-m02)   </cpu>
	I0621 18:27:38.324199   30068 main.go:141] libmachine: (ha-406291-m02)   <os>
	I0621 18:27:38.324209   30068 main.go:141] libmachine: (ha-406291-m02)     <type>hvm</type>
	I0621 18:27:38.324220   30068 main.go:141] libmachine: (ha-406291-m02)     <boot dev='cdrom'/>
	I0621 18:27:38.324231   30068 main.go:141] libmachine: (ha-406291-m02)     <boot dev='hd'/>
	I0621 18:27:38.324258   30068 main.go:141] libmachine: (ha-406291-m02)     <bootmenu enable='no'/>
	I0621 18:27:38.324280   30068 main.go:141] libmachine: (ha-406291-m02)   </os>
	I0621 18:27:38.324293   30068 main.go:141] libmachine: (ha-406291-m02)   <devices>
	I0621 18:27:38.324310   30068 main.go:141] libmachine: (ha-406291-m02)     <disk type='file' device='cdrom'>
	I0621 18:27:38.324333   30068 main.go:141] libmachine: (ha-406291-m02)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/boot2docker.iso'/>
	I0621 18:27:38.324344   30068 main.go:141] libmachine: (ha-406291-m02)       <target dev='hdc' bus='scsi'/>
	I0621 18:27:38.324350   30068 main.go:141] libmachine: (ha-406291-m02)       <readonly/>
	I0621 18:27:38.324357   30068 main.go:141] libmachine: (ha-406291-m02)     </disk>
	I0621 18:27:38.324363   30068 main.go:141] libmachine: (ha-406291-m02)     <disk type='file' device='disk'>
	I0621 18:27:38.324375   30068 main.go:141] libmachine: (ha-406291-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0621 18:27:38.324390   30068 main.go:141] libmachine: (ha-406291-m02)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/ha-406291-m02.rawdisk'/>
	I0621 18:27:38.324401   30068 main.go:141] libmachine: (ha-406291-m02)       <target dev='hda' bus='virtio'/>
	I0621 18:27:38.324412   30068 main.go:141] libmachine: (ha-406291-m02)     </disk>
	I0621 18:27:38.324421   30068 main.go:141] libmachine: (ha-406291-m02)     <interface type='network'>
	I0621 18:27:38.324431   30068 main.go:141] libmachine: (ha-406291-m02)       <source network='mk-ha-406291'/>
	I0621 18:27:38.324442   30068 main.go:141] libmachine: (ha-406291-m02)       <model type='virtio'/>
	I0621 18:27:38.324453   30068 main.go:141] libmachine: (ha-406291-m02)     </interface>
	I0621 18:27:38.324465   30068 main.go:141] libmachine: (ha-406291-m02)     <interface type='network'>
	I0621 18:27:38.324474   30068 main.go:141] libmachine: (ha-406291-m02)       <source network='default'/>
	I0621 18:27:38.324481   30068 main.go:141] libmachine: (ha-406291-m02)       <model type='virtio'/>
	I0621 18:27:38.324493   30068 main.go:141] libmachine: (ha-406291-m02)     </interface>
	I0621 18:27:38.324503   30068 main.go:141] libmachine: (ha-406291-m02)     <serial type='pty'>
	I0621 18:27:38.324516   30068 main.go:141] libmachine: (ha-406291-m02)       <target port='0'/>
	I0621 18:27:38.324527   30068 main.go:141] libmachine: (ha-406291-m02)     </serial>
	I0621 18:27:38.324540   30068 main.go:141] libmachine: (ha-406291-m02)     <console type='pty'>
	I0621 18:27:38.324553   30068 main.go:141] libmachine: (ha-406291-m02)       <target type='serial' port='0'/>
	I0621 18:27:38.324562   30068 main.go:141] libmachine: (ha-406291-m02)     </console>
	I0621 18:27:38.324572   30068 main.go:141] libmachine: (ha-406291-m02)     <rng model='virtio'>
	I0621 18:27:38.324596   30068 main.go:141] libmachine: (ha-406291-m02)       <backend model='random'>/dev/random</backend>
	I0621 18:27:38.324609   30068 main.go:141] libmachine: (ha-406291-m02)     </rng>
	I0621 18:27:38.324630   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324640   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324648   30068 main.go:141] libmachine: (ha-406291-m02)   </devices>
	I0621 18:27:38.324660   30068 main.go:141] libmachine: (ha-406291-m02) </domain>
	I0621 18:27:38.324670   30068 main.go:141] libmachine: (ha-406291-m02) 
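
The block above is the libvirt domain definition the kvm2 driver hands to libvirt before booting the node. A condensed sketch of producing such XML from a handful of per-node parameters with text/template (DomainSpec, the template, and the paths are simplified placeholders; they omit the serial console, RNG device, and second NIC shown in the real definition):

package main

import (
	"os"
	"text/template"
)

// DomainSpec carries the values that differ between minikube nodes in the
// XML above; everything else stays constant in the template.
type DomainSpec struct {
	Name      string
	MemoryMiB int
	VCPUs     int
	DiskPath  string
	ISOPath   string
	Network   string
}

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
  </devices>
</domain>
`

func main() {
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	spec := DomainSpec{
		Name:      "ha-406291-m02",
		MemoryMiB: 2200,
		VCPUs:     2,
		DiskPath:  "/path/to/ha-406291-m02.rawdisk", // illustrative path
		ISOPath:   "/path/to/boot2docker.iso",       // illustrative path
		Network:   "mk-ha-406291",
	}
	// The rendered XML would then be passed to libvirt to define the domain.
	if err := tmpl.Execute(os.Stdout, spec); err != nil {
		panic(err)
	}
}
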
	I0621 18:27:38.332042   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:20:08:0e in network default
	I0621 18:27:38.332641   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring networks are active...
	I0621 18:27:38.332676   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:38.333428   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring network default is active
	I0621 18:27:38.333804   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring network mk-ha-406291 is active
	I0621 18:27:38.334296   30068 main.go:141] libmachine: (ha-406291-m02) Getting domain xml...
	I0621 18:27:38.335120   30068 main.go:141] libmachine: (ha-406291-m02) Creating domain...
	I0621 18:27:39.549305   30068 main.go:141] libmachine: (ha-406291-m02) Waiting to get IP...
	I0621 18:27:39.550967   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:39.551951   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:39.551976   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:39.551936   30460 retry.go:31] will retry after 267.635955ms: waiting for machine to come up
	I0621 18:27:39.821522   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:39.821997   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:39.822029   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:39.821946   30460 retry.go:31] will retry after 374.873977ms: waiting for machine to come up
	I0621 18:27:40.198386   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:40.198873   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:40.198904   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:40.198809   30460 retry.go:31] will retry after 315.815993ms: waiting for machine to come up
	I0621 18:27:40.516366   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:40.516862   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:40.516886   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:40.516817   30460 retry.go:31] will retry after 541.866776ms: waiting for machine to come up
	I0621 18:27:41.060525   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:41.061206   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:41.061240   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:41.061128   30460 retry.go:31] will retry after 493.062164ms: waiting for machine to come up
	I0621 18:27:41.555747   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:41.556109   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:41.556139   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:41.556061   30460 retry.go:31] will retry after 805.68132ms: waiting for machine to come up
	I0621 18:27:42.362929   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:42.363432   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:42.363464   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:42.363390   30460 retry.go:31] will retry after 986.445399ms: waiting for machine to come up
	I0621 18:27:43.351818   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:43.352265   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:43.352293   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:43.352201   30460 retry.go:31] will retry after 1.001415085s: waiting for machine to come up
	I0621 18:27:44.355253   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:44.355689   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:44.355710   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:44.355671   30460 retry.go:31] will retry after 1.270979624s: waiting for machine to come up
	I0621 18:27:45.627787   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:45.628323   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:45.628354   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:45.628272   30460 retry.go:31] will retry after 2.328221347s: waiting for machine to come up
	I0621 18:27:47.958352   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:47.958918   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:47.958945   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:47.958858   30460 retry.go:31] will retry after 2.603205559s: waiting for machine to come up
	I0621 18:27:50.565502   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:50.565956   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:50.565982   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:50.565839   30460 retry.go:31] will retry after 3.267607258s: waiting for machine to come up
	I0621 18:27:53.834801   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:53.835311   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:53.835344   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:53.835270   30460 retry.go:31] will retry after 4.450176964s: waiting for machine to come up
	I0621 18:27:58.286744   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.287205   30068 main.go:141] libmachine: (ha-406291-m02) Found IP for machine: 192.168.39.89
	I0621 18:27:58.287228   30068 main.go:141] libmachine: (ha-406291-m02) Reserving static IP address...
	I0621 18:27:58.287241   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has current primary IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.287601   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find host DHCP lease matching {name: "ha-406291-m02", mac: "52:54:00:a6:9a:09", ip: "192.168.39.89"} in network mk-ha-406291
	I0621 18:27:58.359643   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Getting to WaitForSSH function...
	I0621 18:27:58.359672   30068 main.go:141] libmachine: (ha-406291-m02) Reserved static IP address: 192.168.39.89
	I0621 18:27:58.359686   30068 main.go:141] libmachine: (ha-406291-m02) Waiting for SSH to be available...
	I0621 18:27:58.362234   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.362656   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.362687   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.362831   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH client type: external
	I0621 18:27:58.362856   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa (-rw-------)
	I0621 18:27:58.362889   30068 main.go:141] libmachine: (ha-406291-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:58.362901   30068 main.go:141] libmachine: (ha-406291-m02) DBG | About to run SSH command:
	I0621 18:27:58.362914   30068 main.go:141] libmachine: (ha-406291-m02) DBG | exit 0
	I0621 18:27:58.489760   30068 main.go:141] libmachine: (ha-406291-m02) DBG | SSH cmd err, output: <nil>: 
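
The "will retry after 267ms ... 4.45s" progression above is a growing, jittered backoff while polling the network's DHCP leases for the new domain's IP, followed by the same pattern for SSH reachability. A generic Go sketch of that retry shape (lookupIP is a placeholder for the real lease query; the delays are illustrative):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt network's DHCP leases for the
// domain's MAC address; here it simply fails a few times before succeeding.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("no lease yet")
	}
	return "192.168.39.89", nil
}

// waitForIP retries with a growing, jittered delay until the deadline,
// the same shape as the retry lines in the log above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out waiting for IP: %v", err)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the base delay
	}
}

func main() {
	ip, err := waitForIP(3 * time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("found IP:", ip)
}
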
	I0621 18:27:58.490247   30068 main.go:141] libmachine: (ha-406291-m02) KVM machine creation complete!
	I0621 18:27:58.490512   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:58.491093   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:58.491338   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:58.491506   30068 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0621 18:27:58.491523   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:27:58.492807   30068 main.go:141] libmachine: Detecting operating system of created instance...
	I0621 18:27:58.492820   30068 main.go:141] libmachine: Waiting for SSH to be available...
	I0621 18:27:58.492825   30068 main.go:141] libmachine: Getting to WaitForSSH function...
	I0621 18:27:58.492853   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.495422   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.495802   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.495822   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.496013   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.496199   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.496377   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.496515   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.496690   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.496943   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.496957   30068 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0621 18:27:58.609072   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:58.609094   30068 main.go:141] libmachine: Detecting the provisioner...
	I0621 18:27:58.609101   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.611976   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.612412   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.612450   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.612655   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.612869   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.613083   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.613234   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.613421   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.613617   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.613629   30068 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0621 18:27:58.726636   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0621 18:27:58.726736   30068 main.go:141] libmachine: found compatible host: buildroot
	I0621 18:27:58.726751   30068 main.go:141] libmachine: Provisioning with buildroot...
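
Provisioner detection above works by reading /etc/os-release over SSH and branching on its ID field (here "buildroot"). A small sketch of that parsing, assuming the file contents are already in hand (parseOSRelease is illustrative, not the provisioner's own helper):

package main

import (
	"fmt"
	"strings"
)

// parseOSRelease turns the KEY=value lines of /etc/os-release into a map,
// stripping optional quotes, so the caller can branch on ID or VERSION_ID.
func parseOSRelease(contents string) map[string]string {
	info := map[string]string{}
	for _, line := range strings.Split(contents, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		info[k] = strings.Trim(v, `"`)
	}
	return info
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(sample)
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host:", info["PRETTY_NAME"])
	}
}
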
	I0621 18:27:58.726768   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.727017   30068 buildroot.go:166] provisioning hostname "ha-406291-m02"
	I0621 18:27:58.727040   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.727234   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.729851   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.730255   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.730296   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.730453   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.730628   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.730787   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.730932   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.731090   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.731271   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.731295   30068 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291-m02 && echo "ha-406291-m02" | sudo tee /etc/hostname
	I0621 18:27:58.855682   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291-m02
	
	I0621 18:27:58.855710   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.858373   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.858679   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.858702   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.858921   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.859107   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.859289   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.859473   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.859613   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.859768   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.859784   30068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:27:58.979692   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
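
The shell run just above sets the hostname and makes sure /etc/hosts maps 127.0.1.1 to it without adding duplicates. The same idempotent update, expressed over the file contents in Go as a sketch (patchHosts is illustrative; the real step edits the file in place over SSH as logged):

package main

import (
	"fmt"
	"strings"
)

// patchHosts ensures the hosts-file content maps 127.0.1.1 to hostname exactly
// once: it leaves the content untouched if the hostname is already listed,
// rewrites an existing 127.0.1.1 line if present, and appends otherwise.
func patchHosts(hosts, hostname string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) >= 2 && f[len(f)-1] == hostname {
			return hosts // hostname already mapped; nothing to do
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname // rewrite the existing loopback alias
			return strings.Join(lines, "\n")
		}
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
}

func main() {
	before := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	fmt.Print(patchHosts(before, "ha-406291-m02"))
}
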
	I0621 18:27:58.979722   30068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:27:58.979735   30068 buildroot.go:174] setting up certificates
	I0621 18:27:58.979743   30068 provision.go:84] configureAuth start
	I0621 18:27:58.979750   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.980076   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:58.982730   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.983078   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.983110   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.983252   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.985344   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.985701   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.985721   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.985890   30068 provision.go:143] copyHostCerts
	I0621 18:27:58.985924   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:58.985962   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:27:58.985976   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:58.986057   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:27:58.986156   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:58.986180   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:27:58.986187   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:58.986229   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:27:58.986293   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:58.986317   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:27:58.986326   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:58.986360   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:27:58.986426   30068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291-m02 san=[127.0.0.1 192.168.39.89 ha-406291-m02 localhost minikube]
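
The server certificate generated above lists SANs covering the loopback address, the node IP, the hostname, localhost, and minikube, so the daemon endpoint is valid under any of those names. A condensed Go sketch of issuing such a certificate with crypto/x509, self-signed for brevity where the real provisioner signs with the ca.pem/ca-key.pem pair (the key type and expiry choices are illustrative):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Key for the server certificate (ECDSA keeps the sketch short).
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-406291-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration in the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SANs listed in the log: loopback, node IP, hostname, localhost, minikube.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.89")},
		DNSNames:    []string{"ha-406291-m02", "localhost", "minikube"},
	}

	// Self-signed here for brevity; the provisioner signs with the minikube CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
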
	I0621 18:27:59.066564   30068 provision.go:177] copyRemoteCerts
	I0621 18:27:59.066626   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:27:59.066653   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.069578   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.069924   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.069947   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.070132   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.070298   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.070432   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.070553   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.157218   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:27:59.157315   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 18:27:59.181198   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:27:59.181277   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:27:59.204590   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:27:59.204671   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0621 18:27:59.228836   30068 provision.go:87] duration metric: took 249.081961ms to configureAuth
	I0621 18:27:59.228857   30068 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:27:59.229023   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:59.229086   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.231759   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.232083   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.232114   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.232338   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.232525   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.232684   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.232859   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.233030   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:59.233222   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:59.233247   30068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:27:59.513149   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 18:27:59.513176   30068 main.go:141] libmachine: Checking connection to Docker...
	I0621 18:27:59.513184   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetURL
	I0621 18:27:59.514352   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using libvirt version 6000000
	I0621 18:27:59.516825   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.517208   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.517232   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.517421   30068 main.go:141] libmachine: Docker is up and running!
	I0621 18:27:59.517438   30068 main.go:141] libmachine: Reticulating splines...
	I0621 18:27:59.517446   30068 client.go:171] duration metric: took 21.562982419s to LocalClient.Create
	I0621 18:27:59.517469   30068 start.go:167] duration metric: took 21.563040702s to libmachine.API.Create "ha-406291"
	I0621 18:27:59.517482   30068 start.go:293] postStartSetup for "ha-406291-m02" (driver="kvm2")
	I0621 18:27:59.517494   30068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:27:59.517516   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.517768   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:27:59.517792   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.520113   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.520510   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.520540   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.520681   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.520881   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.521084   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.521256   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.607755   30068 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:27:59.611555   30068 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:27:59.611581   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:27:59.611696   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:27:59.611804   30068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:27:59.611817   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:27:59.611939   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:27:59.620359   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:59.643420   30068 start.go:296] duration metric: took 125.923821ms for postStartSetup
	I0621 18:27:59.643465   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:59.644062   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:59.646345   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.646685   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.646713   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.646924   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:59.647158   30068 start.go:128] duration metric: took 21.710666055s to createHost
	I0621 18:27:59.647181   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.649469   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.649766   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.649808   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.649962   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.650164   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.650334   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.650463   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.650585   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:59.650778   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:59.650790   30068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0621 18:27:59.762223   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718994479.737744516
	
	I0621 18:27:59.762248   30068 fix.go:216] guest clock: 1718994479.737744516
	I0621 18:27:59.762259   30068 fix.go:229] Guest: 2024-06-21 18:27:59.737744516 +0000 UTC Remote: 2024-06-21 18:27:59.647170431 +0000 UTC m=+77.232139235 (delta=90.574085ms)
	I0621 18:27:59.762274   30068 fix.go:200] guest clock delta is within tolerance: 90.574085ms
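The clock check above evidently runs date +%s.%N on the guest (the logger dropped its format verbs, hence the %!s(MISSING).%!N(MISSING) line) and compares the result with the host-side timestamp; the 90.574085ms delta is within tolerance, so no resync is needed. A minimal illustrative Go version of that comparison, using the exact values from the log (guestClockDelta is a hypothetical helper, not minikube's fix.go code):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the output of `date +%s.%N` run on the guest
// (e.g. "1718994479.737744516") and returns how far the guest clock is
// ahead of the given host reference time.
func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return 0, err
		}
	}
	guest := time.Unix(sec, nsec)
	return guest.Sub(host), nil
}

func main() {
	// "Remote" timestamp from the log line above.
	host := time.Date(2024, 6, 21, 18, 27, 59, 647170431, time.UTC)
	d, _ := guestClockDelta("1718994479.737744516", host)
	fmt.Println(d) // ~90.574085ms, matching the reported delta
}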
	I0621 18:27:59.762279   30068 start.go:83] releasing machines lock for "ha-406291-m02", held for 21.825898335s
	I0621 18:27:59.762294   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.762550   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:59.765379   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.765744   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.765772   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.768017   30068 out.go:177] * Found network options:
	I0621 18:27:59.769201   30068 out.go:177]   - NO_PROXY=192.168.39.198
	W0621 18:27:59.770311   30068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0621 18:27:59.770350   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.770853   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.771049   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.771143   30068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:27:59.771180   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	W0621 18:27:59.771247   30068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0621 18:27:59.771305   30068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:27:59.771322   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.774073   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774210   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774455   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.774482   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774586   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.774595   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.774615   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774740   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.774796   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.774875   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.774963   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.775030   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.775150   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.775184   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:28:00.009851   30068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:28:00.016373   30068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:28:00.016450   30068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:28:00.032199   30068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0621 18:28:00.032221   30068 start.go:494] detecting cgroup driver to use...
	I0621 18:28:00.032283   30068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:28:00.047343   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:28:00.061720   30068 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:28:00.061774   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:28:00.074668   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:28:00.087919   30068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:28:00.213060   30068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:28:00.376339   30068 docker.go:233] disabling docker service ...
	I0621 18:28:00.376406   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:28:00.391732   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:28:00.405305   30068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:28:00.525867   30068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:28:00.642362   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 18:28:00.656276   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:28:00.673811   30068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:28:00.673883   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.683794   30068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:28:00.683849   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.693601   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.703298   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.712924   30068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:28:00.722921   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.733272   30068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.749781   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.759708   30068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:28:00.768749   30068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 18:28:00.768811   30068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 18:28:00.780758   30068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 18:28:00.789993   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:28:00.904855   30068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 18:28:01.039631   30068 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:28:01.039706   30068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:28:01.044480   30068 start.go:562] Will wait 60s for crictl version
	I0621 18:28:01.044536   30068 ssh_runner.go:195] Run: which crictl
	I0621 18:28:01.048220   30068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:28:01.089333   30068 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:28:01.089402   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:28:01.115665   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:28:01.144585   30068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:28:01.145952   30068 out.go:177]   - env NO_PROXY=192.168.39.198
	I0621 18:28:01.147149   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:28:01.149745   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:28:01.150121   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:28:01.150153   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:28:01.150424   30068 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:28:01.154395   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:28:01.167802   30068 mustload.go:65] Loading cluster: ha-406291
	I0621 18:28:01.168024   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:28:01.168528   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:28:01.168581   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:28:01.183458   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35465
	I0621 18:28:01.183955   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:28:01.184452   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:28:01.184472   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:28:01.184809   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:28:01.185006   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:28:01.186504   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:28:01.186796   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:28:01.186838   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:28:01.201898   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38995
	I0621 18:28:01.202307   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:28:01.202715   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:28:01.202735   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:28:01.203060   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:28:01.203242   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:28:01.203402   30068 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.89
	I0621 18:28:01.203414   30068 certs.go:194] generating shared ca certs ...
	I0621 18:28:01.203427   30068 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.203536   30068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:28:01.203569   30068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:28:01.203578   30068 certs.go:256] generating profile certs ...
	I0621 18:28:01.203637   30068 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:28:01.203663   30068 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63
	I0621 18:28:01.203682   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.89 192.168.39.254]
	I0621 18:28:01.277240   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 ...
	I0621 18:28:01.277269   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63: {Name:mk0eb1e86875fe5e87f845f9e621f66001c859bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.277433   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63 ...
	I0621 18:28:01.277446   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63: {Name:mk95e28e76a927e44fae3dabafa76ecc474c70ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.277517   30068 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:28:01.277686   30068 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
	I0621 18:28:01.277852   30068 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:28:01.277870   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:28:01.277883   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:28:01.277894   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:28:01.277906   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:28:01.277922   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:28:01.277934   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:28:01.277946   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:28:01.277957   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:28:01.278003   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:28:01.278030   30068 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:28:01.278039   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:28:01.278059   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:28:01.278080   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:28:01.278100   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:28:01.278136   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:28:01.278162   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.278179   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.278191   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.278220   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:28:01.281289   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:28:01.281749   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:28:01.281771   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:28:01.281960   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:28:01.282180   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:28:01.282351   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:28:01.282534   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:28:01.350153   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0621 18:28:01.355146   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0621 18:28:01.366317   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0621 18:28:01.370418   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0621 18:28:01.381527   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0621 18:28:01.385371   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0621 18:28:01.395583   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0621 18:28:01.399523   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0621 18:28:01.409427   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0621 18:28:01.413340   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0621 18:28:01.424281   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0621 18:28:01.428574   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0621 18:28:01.443501   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:28:01.467141   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:28:01.489464   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:28:01.512839   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:28:01.536345   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0621 18:28:01.560903   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0621 18:28:01.585228   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:28:01.609236   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:28:01.632797   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:28:01.657717   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:28:01.680728   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:28:01.704813   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0621 18:28:01.722206   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0621 18:28:01.739548   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0621 18:28:01.757066   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0621 18:28:01.773769   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0621 18:28:01.790648   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0621 18:28:01.807019   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0621 18:28:01.824606   30068 ssh_runner.go:195] Run: openssl version
	I0621 18:28:01.830760   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:28:01.841994   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.846701   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.846753   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.852556   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0621 18:28:01.863407   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:28:01.874586   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.879134   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.879185   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.884636   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:28:01.895639   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:28:01.907107   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.911747   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.911813   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.917537   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 18:28:01.928452   30068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:28:01.932569   30068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 18:28:01.932640   30068 kubeadm.go:928] updating node {m02 192.168.39.89 8443 v1.30.2 crio true true} ...
	I0621 18:28:01.932831   30068 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
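The kubelet unit dumped above is rendered per node: the Kubernetes version selects the binary directory under /var/lib/minikube/binaries, and the node's name and IP feed --hostname-override and --node-ip. A small text/template sketch that produces the same drop-in (the struct fields and the drop-in path in the comment are illustrative, not minikube's actual types or locations):

package main

import (
	"os"
	"text/template"
)

// Template text mirrors the unit override printed in the log above.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	data := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.30.2", "ha-406291-m02", "192.168.39.89"}
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Printed to stdout here; on the node this would be written to a systemd
	// drop-in (e.g. /etc/systemd/system/kubelet.service.d/10-kubeadm.conf)
	// followed by systemctl daemon-reload.
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}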
	I0621 18:28:01.932869   30068 kube-vip.go:115] generating kube-vip config ...
	I0621 18:28:01.932919   30068 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:28:01.949970   30068 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:28:01.950046   30068 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0621 18:28:01.950102   30068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:28:01.960116   30068 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0621 18:28:01.960197   30068 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0621 18:28:01.969893   30068 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0621 18:28:01.969926   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0621 18:28:01.969997   30068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0621 18:28:01.970033   30068 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0621 18:28:01.970001   30068 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0621 18:28:01.974344   30068 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0621 18:28:01.974375   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0621 18:28:02.755689   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0621 18:28:02.755764   30068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0621 18:28:02.760415   30068 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0621 18:28:02.760448   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0621 18:28:55.051081   30068 out.go:177] 
	W0621 18:28:55.052955   30068 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0] Decompressors:map[bz2:0xc000769610 gz:0xc000769618 tar:0xc0007695c0 tar.bz2:0xc0007695d0 tar.gz:0xc0007695e0 tar.xz:0xc0007695f0 tar.zst:0xc000769600 tbz2:0xc0007695d0 tgz:0xc0007695e0 txz:0xc0007695f0 tzst:0xc000769600 xz:0xc000769620 zip:0xc000769630 zst:0xc000769628] Getters:map[file:0xc0009371c0 http:0xc0008bcf50 https:0xc0008bcfa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:46716->151.101.193.55:443: read: connection reset by peer
	W0621 18:28:55.052979   30068 out.go:239] * 
	W0621 18:28:55.053829   30068 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0621 18:28:55.055312   30068 out.go:177] 
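The failure itself is the kubelet download: go-getter (its Client struct is what the error above dumps; Mode:2 corresponds to getter.ClientModeFile, a single-file fetch) pulls the binary from dl.k8s.io with a checksum taken from the companion .sha256 file and hits a TCP connection reset. A stripped-down reproduction of the same request, useful for retrying the download by hand; the destination path here is hypothetical, whereas the test writes into the .minikube cache:

package main

import (
	"context"
	"log"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	// Same request shape as the failed download: fetch the kubelet binary and
	// verify it against the published .sha256 file (the ?checksum=file:...
	// query string is go-getter's checksum-from-URL syntax).
	src := "https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet" +
		"?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256"
	dst := "/tmp/kubelet.download" // hypothetical destination

	client := &getter.Client{
		Ctx:  context.Background(),
		Src:  src,
		Dst:  dst,
		Mode: getter.ClientModeFile, // Mode:2 in the logged struct
	}
	if err := client.Get(); err != nil {
		log.Fatalf("download failed: %v", err)
	}
	log.Printf("downloaded and checksum-verified %s", dst)
}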
	
	
	==> CRI-O <==
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.120314967Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995284120130746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8cf9ffa3-e3bb-47c8-bf2c-ed4c265805d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.120743599Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dbfd7600-c995-4fa0-9e7f-11d811bf7404 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.120809988Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dbfd7600-c995-4fa0-9e7f-11d811bf7404 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.121046458Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dbfd7600-c995-4fa0-9e7f-11d811bf7404 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.161523013Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=590652a4-dd0b-4d9a-8409-8ebc4f8f3d24 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.161615553Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=590652a4-dd0b-4d9a-8409-8ebc4f8f3d24 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.163411032Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c7464d7c-1651-4536-ab92-0434e3001575 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.163840025Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995284163816989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7464d7c-1651-4536-ab92-0434e3001575 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.164631323Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=755c81bc-dca4-48b7-9123-75d7f2e79e8a name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.164700191Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=755c81bc-dca4-48b7-9123-75d7f2e79e8a name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.164943672Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=755c81bc-dca4-48b7-9123-75d7f2e79e8a name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.201252976Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3a65ebc0-dce5-4ce5-842d-9cb2316975d1 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.201387716Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3a65ebc0-dce5-4ce5-842d-9cb2316975d1 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.202426083Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=abb7442b-e322-4ea1-b094-2fe650014155 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.202877984Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995284202855550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=abb7442b-e322-4ea1-b094-2fe650014155 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.203328772Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a1186c6-8872-4442-9ae6-81bc0aed7eb9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.203384404Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a1186c6-8872-4442-9ae6-81bc0aed7eb9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.203599257Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5a1186c6-8872-4442-9ae6-81bc0aed7eb9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.238469605Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dfffac26-6d37-4c7b-b5b9-4f9caec8cf27 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.238571731Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dfffac26-6d37-4c7b-b5b9-4f9caec8cf27 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.239621794Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3abda4e0-196a-4c5a-91e2-301d8d90b7b3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.240095108Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995284240071467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3abda4e0-196a-4c5a-91e2-301d8d90b7b3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.240892547Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=462f197b-89b7-4e66-bee9-0bad788834ac name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.240948504Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=462f197b-89b7-4e66-bee9-0bad788834ac name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:41:24 ha-406291 crio[679]: time="2024-06-21 18:41:24.241213550Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=462f197b-89b7-4e66-bee9-0bad788834ac name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	252cb2f279857       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   12 minutes ago      Running             busybox                   0                   cd0fd4f6a3d6c       busybox-fc5497c4f-qvl48
	6d732e2622f11       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   0                   59eb38b2794b0       coredns-7db6d8ff4d-7ng4v
	6088ccc5ec4be       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   0                   a68caa8578d30       coredns-7db6d8ff4d-nx5xs
	9d0ad73531279       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       0                   ab6a16146209c       storage-provisioner
	468b13f5a8054       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      13 minutes ago      Running             kindnet-cni               0                   956df8749e8db       kindnet-vnds7
	e41f8891c5177       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      13 minutes ago      Running             kube-proxy                0                   ab9fd8c2e0094       kube-proxy-xnbqj
	96a229fabb5aa       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     14 minutes ago      Running             kube-vip                  0                   79ad95611cf22       kube-vip-ha-406291
	a143e6000662a       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      14 minutes ago      Running             kube-scheduler            0                   7cae0fc993f3a       kube-scheduler-ha-406291
	89b399d67fa40       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago      Running             etcd                      0                   afce4542ea7ca       etcd-ha-406291
	2d71c6ae5cee5       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      14 minutes ago      Running             kube-apiserver            0                   9552de7a0cb73       kube-apiserver-ha-406291
	3fbe446b39e8d       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      14 minutes ago      Running             kube-controller-manager   0                   2b8837f8e36da       kube-controller-manager-ha-406291
	
	
	==> coredns [6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57758 - 16030 "HINFO IN 938012208132191314.8379741084222464033. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014128651s
	[INFO] 10.244.0.4:60864 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000870211s
	[INFO] 10.244.0.4:49527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014553s
	[INFO] 10.244.0.4:59987 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181145s
	[INFO] 10.244.0.4:59378 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009664502s
	[INFO] 10.244.0.4:59188 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181625s
	[INFO] 10.244.0.4:33100 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137671s
	[INFO] 10.244.0.4:43551 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129631s
	[INFO] 10.244.0.4:59759 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152418s
	[INFO] 10.244.0.4:60292 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090372s
	[INFO] 10.244.0.4:47967 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093215s
	[INFO] 10.244.0.4:44642 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175452s
	[INFO] 10.244.0.4:49677 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000070108s
	
	
	==> coredns [6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45911 - 30730 "HINFO IN 2397840142540691982.2649863782968500509. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014966559s
	[INFO] 10.244.0.4:38404 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.013105268s
	[INFO] 10.244.0.4:49299 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.225770527s
	[INFO] 10.244.0.4:41342 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.010990835s
	[INFO] 10.244.0.4:55838 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003903098s
	[INFO] 10.244.0.4:59078 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163236s
	[INFO] 10.244.0.4:39541 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147137s
	[INFO] 10.244.0.4:47420 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000120366s
	[INFO] 10.244.0.4:54009 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000255172s
	
	
	==> describe nodes <==
	Name:               ha-406291
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=ha-406291
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_21T18_27_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 18:27:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406291
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 18:41:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 18:39:39 +0000   Fri, 21 Jun 2024 18:27:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    ha-406291
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 10b5f2f4e64d426eb3a71e7a23c0cea5
	  System UUID:                10b5f2f4-e64d-426e-b3a7-1e7a23c0cea5
	  Boot ID:                    10778ad9-ed13-4749-a084-25b2b2bfde76
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qvl48              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-7ng4v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-nx5xs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-406291                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-vnds7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-406291             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-406291    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-xnbqj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-406291             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-406291                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node ha-406291 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node ha-406291 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node ha-406291 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m   node-controller  Node ha-406291 event: Registered Node ha-406291 in Controller
	  Normal  NodeReady                13m   kubelet          Node ha-406291 status is now: NodeReady
	
	
	Name:               ha-406291-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406291-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=ha-406291
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_21T18_41_02_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 18:41:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406291-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 18:41:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 18:41:10 +0000   Fri, 21 Jun 2024 18:41:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 18:41:10 +0000   Fri, 21 Jun 2024 18:41:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 18:41:10 +0000   Fri, 21 Jun 2024 18:41:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 18:41:10 +0000   Fri, 21 Jun 2024 18:41:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.193
	  Hostname:    ha-406291-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7aeb6d6b65b246d89e229cf308cb4c9a
	  System UUID:                7aeb6d6b-65b2-46d8-9e22-9cf308cb4c9a
	  Boot ID:                    077bb108-4737-40c3-9892-3695b5a49d4a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-drm4v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-xrm6w              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23s
	  kube-system                 kube-proxy-vknv4           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18s                kube-proxy       
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)  kubelet          Node ha-406291-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)  kubelet          Node ha-406291-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)  kubelet          Node ha-406291-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22s                node-controller  Node ha-406291-m03 event: Registered Node ha-406291-m03 in Controller
	  Normal  NodeReady                14s                kubelet          Node ha-406291-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[Jun21 18:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051748] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037330] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.458081] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.725935] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +4.855560] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun21 18:27] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.057394] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056681] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.167604] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.147792] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.253886] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +3.905184] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.549385] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.060073] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.066237] systemd-fstab-generator[1360]: Ignoring "noauto" option for root device
	[  +0.078680] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.552032] kauditd_printk_skb: 21 callbacks suppressed
	[Jun21 18:28] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8] <==
	{"level":"info","ts":"2024-06-21T18:27:18.93929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-21T18:27:18.93932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgPreVoteResp from f1d2ab5330a2a0e3 at term 1"}
	{"level":"info","ts":"2024-06-21T18:27:18.939332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became candidate at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.939339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgVoteResp from f1d2ab5330a2a0e3 at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.939349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became leader at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.93936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f1d2ab5330a2a0e3 elected leader f1d2ab5330a2a0e3 at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.949394Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.951989Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f1d2ab5330a2a0e3","local-member-attributes":"{Name:ha-406291 ClientURLs:[https://192.168.39.198:2379]}","request-path":"/0/members/f1d2ab5330a2a0e3/attributes","cluster-id":"9fb372ad12afeb1b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-21T18:27:18.952029Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:27:18.952218Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:27:18.966375Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9fb372ad12afeb1b","local-member-id":"f1d2ab5330a2a0e3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.966532Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.966591Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.968078Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.198:2379"}
	{"level":"info","ts":"2024-06-21T18:27:18.969834Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-21T18:27:18.973596Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-21T18:27:18.986355Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-21T18:27:37.357719Z","caller":"traceutil/trace.go:171","msg":"trace[571743030] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"105.067279ms","start":"2024-06-21T18:27:37.252598Z","end":"2024-06-21T18:27:37.357665Z","steps":["trace[571743030] 'process raft request'  (duration: 48.775466ms)","trace[571743030] 'compare'  (duration: 56.093787ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T18:28:12.689426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.176174ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11593268453381319053 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:496 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:369 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-21T18:28:12.689586Z","caller":"traceutil/trace.go:171","msg":"trace[939483523] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"172.541349ms","start":"2024-06-21T18:28:12.517021Z","end":"2024-06-21T18:28:12.689563Z","steps":["trace[939483523] 'process raft request'  (duration: 46.605278ms)","trace[939483523] 'compare'  (duration: 124.988397ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-21T18:37:19.55118Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2024-06-21T18:37:19.562898Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":969,"took":"11.353931ms","hash":518064132,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2441216,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-06-21T18:37:19.562955Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":518064132,"revision":969,"compact-revision":-1}
	{"level":"info","ts":"2024-06-21T18:41:01.46327Z","caller":"traceutil/trace.go:171","msg":"trace[373022302] transaction","detail":"{read_only:false; response_revision:1916; number_of_response:1; }","duration":"202.232692ms","start":"2024-06-21T18:41:01.260997Z","end":"2024-06-21T18:41:01.46323Z","steps":["trace[373022302] 'process raft request'  (duration: 201.291371ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-21T18:41:01.463374Z","caller":"traceutil/trace.go:171","msg":"trace[1787973675] transaction","detail":"{read_only:false; response_revision:1917; number_of_response:1; }","duration":"177.381269ms","start":"2024-06-21T18:41:01.285981Z","end":"2024-06-21T18:41:01.463362Z","steps":["trace[1787973675] 'process raft request'  (duration: 177.120594ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:41:24 up 14 min,  0 users,  load average: 0.41, 0.24, 0.14
	Linux ha-406291 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d] <==
	I0621 18:39:49.520764       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:49.520908       1 main.go:227] handling current node
	I0621 18:39:59.524302       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:39:59.524430       1 main.go:227] handling current node
	I0621 18:40:09.536871       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:09.536951       1 main.go:227] handling current node
	I0621 18:40:19.546045       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:19.546228       1 main.go:227] handling current node
	I0621 18:40:29.557033       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:29.557254       1 main.go:227] handling current node
	I0621 18:40:39.561036       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:39.561193       1 main.go:227] handling current node
	I0621 18:40:49.569235       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:49.569361       1 main.go:227] handling current node
	I0621 18:40:59.579375       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:40:59.579516       1 main.go:227] handling current node
	I0621 18:41:09.583520       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:41:09.583631       1 main.go:227] handling current node
	I0621 18:41:09.583661       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:41:09.583679       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:41:09.583931       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.193 Flags: [] Table: 0} 
	I0621 18:41:19.597094       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:41:19.597117       1 main.go:227] handling current node
	I0621 18:41:19.597173       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:41:19.597182       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3] <==
	I0621 18:27:21.231033       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0621 18:27:21.231057       1 policy_source.go:224] refreshing policies
	E0621 18:27:21.244004       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0621 18:27:21.291900       1 controller.go:615] quota admission added evaluator for: namespaces
	I0621 18:27:21.301249       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0621 18:27:22.093764       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0621 18:27:22.100226       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0621 18:27:22.100345       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0621 18:27:22.679124       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0621 18:27:22.717908       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0621 18:27:22.803597       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0621 18:27:22.812663       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.198]
	I0621 18:27:22.813674       1 controller.go:615] quota admission added evaluator for: endpoints
	I0621 18:27:22.817676       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0621 18:27:23.142771       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0621 18:27:24.323202       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0621 18:27:24.338622       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0621 18:27:24.532806       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0621 18:27:36.921775       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0621 18:27:37.247444       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0621 18:40:26.217258       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52318: use of closed network connection
	E0621 18:40:26.646809       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52394: use of closed network connection
	E0621 18:40:27.039177       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52460: use of closed network connection
	E0621 18:40:29.475531       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52582: use of closed network connection
	E0621 18:40:29.631306       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52614: use of closed network connection
	
	
	==> kube-controller-manager [3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31] <==
	I0621 18:27:37.660938       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="161.085µs"
	I0621 18:27:39.328050       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="55.475µs"
	I0621 18:27:39.330983       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.725µs"
	I0621 18:27:39.352409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.246µs"
	I0621 18:27:39.366116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.163µs"
	I0621 18:27:40.575618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="65.679µs"
	I0621 18:27:40.612176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.937752ms"
	I0621 18:27:40.612598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.232µs"
	I0621 18:27:40.634931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.444693ms"
	I0621 18:27:40.635035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.847µs"
	I0621 18:27:41.885215       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0621 18:28:57.137627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.563277ms"
	I0621 18:28:57.164070       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.375749ms"
	I0621 18:28:57.164194       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.743µs"
	I0621 18:29:00.876863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.452577ms"
	I0621 18:29:00.877083       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.932µs"
	I0621 18:41:01.468373       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-406291-m03\" does not exist"
	I0621 18:41:01.505245       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-406291-m03" podCIDRs=["10.244.1.0/24"]
	I0621 18:41:02.015312       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-406291-m03"
	I0621 18:41:10.879504       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-406291-m03"
	I0621 18:41:10.905675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="137.95µs"
	I0621 18:41:10.905996       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.91µs"
	I0621 18:41:10.921286       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.939µs"
	I0621 18:41:14.431187       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.902838ms"
	I0621 18:41:14.431268       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.911µs"
	
	
	==> kube-proxy [e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547] <==
	I0621 18:27:38.126736       1 server_linux.go:69] "Using iptables proxy"
	I0621 18:27:38.143236       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.198"]
	I0621 18:27:38.177576       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0621 18:27:38.177626       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0621 18:27:38.177644       1 server_linux.go:165] "Using iptables Proxier"
	I0621 18:27:38.180797       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0621 18:27:38.181002       1 server.go:872] "Version info" version="v1.30.2"
	I0621 18:27:38.181026       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 18:27:38.182882       1 config.go:192] "Starting service config controller"
	I0621 18:27:38.183195       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0621 18:27:38.183262       1 config.go:101] "Starting endpoint slice config controller"
	I0621 18:27:38.183278       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0621 18:27:38.184787       1 config.go:319] "Starting node config controller"
	I0621 18:27:38.184819       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0621 18:27:38.283818       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0621 18:27:38.283839       1 shared_informer.go:320] Caches are synced for service config
	I0621 18:27:38.285303       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d] <==
	W0621 18:27:21.175406       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0621 18:27:21.176948       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0621 18:27:21.176960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0621 18:27:21.176992       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:21.177025       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177056       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0621 18:27:21.177088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0621 18:27:21.177120       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0621 18:27:21.177197       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:21.177204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0621 18:27:22.041765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.041824       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.144830       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:22.144881       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0621 18:27:22.217224       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.217266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.256407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:22.256450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0621 18:27:22.361486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0621 18:27:22.361536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0621 18:27:22.366073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0621 18:27:22.366190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0621 18:27:25.267361       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 21 18:37:24 ha-406291 kubelet[1367]: E0621 18:37:24.483671    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:37:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:37:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:37:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:37:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:38:24 ha-406291 kubelet[1367]: E0621 18:38:24.483473    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:38:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:38:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:38:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:38:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:39:24 ha-406291 kubelet[1367]: E0621 18:39:24.484210    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:39:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:39:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:39:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:39:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:40:24 ha-406291 kubelet[1367]: E0621 18:40:24.483552    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:40:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:40:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:40:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:40:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:41:24 ha-406291 kubelet[1367]: E0621 18:41:24.491424    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:41:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:41:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:41:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:41:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-406291 -n ha-406291
helpers_test.go:261: (dbg) Run:  kubectl --context ha-406291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-p2c87
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-406291 describe pod busybox-fc5497c4f-p2c87
helpers_test.go:282: (dbg) kubectl --context ha-406291 describe pod busybox-fc5497c4f-p2c87:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-p2c87
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q8tzk (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-q8tzk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  2m1s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  6s (x2 over 15s)    default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (2.03s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (299.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-406291 node start m02 -v=7 --alsologtostderr: exit status 80 (4m17.406823947s)

                                                
                                                
-- stdout --
	* Starting "ha-406291-m02" control-plane node in "ha-406291" cluster
	* Restarting existing kvm2 VM for "ha-406291-m02" ...
	* Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0621 18:41:25.398760   35235 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:41:25.399080   35235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:41:25.399091   35235 out.go:304] Setting ErrFile to fd 2...
	I0621 18:41:25.399098   35235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:41:25.399369   35235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:41:25.399643   35235 mustload.go:65] Loading cluster: ha-406291
	I0621 18:41:25.399990   35235 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:41:25.400377   35235 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:25.400435   35235 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:25.416004   35235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46545
	I0621 18:41:25.416429   35235 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:25.416961   35235 main.go:141] libmachine: Using API Version  1
	I0621 18:41:25.416994   35235 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:25.417346   35235 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:25.417542   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	W0621 18:41:25.419150   35235 host.go:58] "ha-406291-m02" host status: Stopped
	I0621 18:41:25.421208   35235 out.go:177] * Starting "ha-406291-m02" control-plane node in "ha-406291" cluster
	I0621 18:41:25.422261   35235 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:41:25.422293   35235 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 18:41:25.422308   35235 cache.go:56] Caching tarball of preloaded images
	I0621 18:41:25.422393   35235 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:41:25.422403   35235 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:41:25.422505   35235 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:41:25.422683   35235 start.go:360] acquireMachinesLock for ha-406291-m02: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:41:25.422738   35235 start.go:364] duration metric: took 23.154µs to acquireMachinesLock for "ha-406291-m02"
	I0621 18:41:25.422752   35235 start.go:96] Skipping create...Using existing machine configuration
	I0621 18:41:25.422757   35235 fix.go:54] fixHost starting: m02
	I0621 18:41:25.423048   35235 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:25.423074   35235 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:25.437434   35235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44349
	I0621 18:41:25.437837   35235 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:25.438264   35235 main.go:141] libmachine: Using API Version  1
	I0621 18:41:25.438278   35235 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:25.438564   35235 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:25.438730   35235 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:41:25.438886   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:41:25.440234   35235 fix.go:112] recreateIfNeeded on ha-406291-m02: state=Stopped err=<nil>
	I0621 18:41:25.440295   35235 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	W0621 18:41:25.440441   35235 fix.go:138] unexpected machine state, will restart: <nil>
	I0621 18:41:25.442413   35235 out.go:177] * Restarting existing kvm2 VM for "ha-406291-m02" ...
	I0621 18:41:25.443676   35235 main.go:141] libmachine: (ha-406291-m02) Calling .Start
	I0621 18:41:25.443837   35235 main.go:141] libmachine: (ha-406291-m02) Ensuring networks are active...
	I0621 18:41:25.444437   35235 main.go:141] libmachine: (ha-406291-m02) Ensuring network default is active
	I0621 18:41:25.444763   35235 main.go:141] libmachine: (ha-406291-m02) Ensuring network mk-ha-406291 is active
	I0621 18:41:25.445119   35235 main.go:141] libmachine: (ha-406291-m02) Getting domain xml...
	I0621 18:41:25.445841   35235 main.go:141] libmachine: (ha-406291-m02) Creating domain...
	I0621 18:41:26.648629   35235 main.go:141] libmachine: (ha-406291-m02) Waiting to get IP...
	I0621 18:41:26.649534   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:26.650008   35235 main.go:141] libmachine: (ha-406291-m02) Found IP for machine: 192.168.39.89
	I0621 18:41:26.650026   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has current primary IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:26.650037   35235 main.go:141] libmachine: (ha-406291-m02) Reserving static IP address...
	I0621 18:41:26.650588   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "ha-406291-m02", mac: "52:54:00:a6:9a:09", ip: "192.168.39.89"} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:41:26.650607   35235 main.go:141] libmachine: (ha-406291-m02) DBG | skip adding static IP to network mk-ha-406291 - found existing host DHCP lease matching {name: "ha-406291-m02", mac: "52:54:00:a6:9a:09", ip: "192.168.39.89"}
	I0621 18:41:26.650621   35235 main.go:141] libmachine: (ha-406291-m02) Reserved static IP address: 192.168.39.89
	I0621 18:41:26.650637   35235 main.go:141] libmachine: (ha-406291-m02) Waiting for SSH to be available...
	I0621 18:41:26.650654   35235 main.go:141] libmachine: (ha-406291-m02) DBG | Getting to WaitForSSH function...
	I0621 18:41:26.653892   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:26.654349   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:41:26.654382   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:26.654602   35235 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH client type: external
	I0621 18:41:26.654628   35235 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa (-rw-------)
	I0621 18:41:26.654681   35235 main.go:141] libmachine: (ha-406291-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:41:26.654709   35235 main.go:141] libmachine: (ha-406291-m02) DBG | About to run SSH command:
	I0621 18:41:26.654721   35235 main.go:141] libmachine: (ha-406291-m02) DBG | exit 0
	I0621 18:41:37.793973   35235 main.go:141] libmachine: (ha-406291-m02) DBG | SSH cmd err, output: <nil>: 
	I0621 18:41:37.794425   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:41:37.795106   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:41:37.798128   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:37.798622   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:41:37.798650   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:37.798904   35235 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:41:37.799103   35235 machine.go:94] provisionDockerMachine start ...
	I0621 18:41:37.799122   35235 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:41:37.799339   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:41:37.801643   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:37.802142   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:41:37.802200   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:37.802300   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:41:37.802497   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:41:37.802687   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:41:37.802845   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:41:37.803037   35235 main.go:141] libmachine: Using SSH client type: native
	I0621 18:41:37.803303   35235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:41:37.803374   35235 main.go:141] libmachine: About to run SSH command:
	hostname
	I0621 18:41:37.906057   35235 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0621 18:41:37.906090   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:41:37.906330   35235 buildroot.go:166] provisioning hostname "ha-406291-m02"
	I0621 18:41:37.906355   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:41:37.906532   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:41:37.909848   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:37.910259   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:41:37.910277   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:37.910445   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:41:37.910640   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:41:37.910792   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:41:37.910957   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:41:37.911135   35235 main.go:141] libmachine: Using SSH client type: native
	I0621 18:41:37.911337   35235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:41:37.911350   35235 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291-m02 && echo "ha-406291-m02" | sudo tee /etc/hostname
	I0621 18:41:38.045519   35235 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291-m02
	
	I0621 18:41:38.045548   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:41:38.048010   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:38.048407   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:41:38.048430   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:38.048622   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:41:38.048815   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:41:38.048976   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:41:38.049122   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:41:38.049278   35235 main.go:141] libmachine: Using SSH client type: native
	I0621 18:41:38.049470   35235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:41:38.049495   35235 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:41:38.162214   35235 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:41:38.162247   35235 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:41:38.162272   35235 buildroot.go:174] setting up certificates
	I0621 18:41:38.162295   35235 provision.go:84] configureAuth start
	I0621 18:41:38.162307   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:41:38.162563   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:41:38.165407   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:38.165831   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:41:38.165862   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:38.166002   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:41:38.168297   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:38.168630   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:41:38.168656   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:38.168861   35235 provision.go:143] copyHostCerts
	I0621 18:41:38.168886   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:41:38.168929   35235 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:41:38.168941   35235 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:41:38.169002   35235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:41:38.169072   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:41:38.169093   35235 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:41:38.169100   35235 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:41:38.169128   35235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:41:38.169168   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:41:38.169185   35235 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:41:38.169191   35235 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:41:38.169212   35235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:41:38.169255   35235 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291-m02 san=[127.0.0.1 192.168.39.89 ha-406291-m02 localhost minikube]
	I0621 18:41:38.339099   35235 provision.go:177] copyRemoteCerts
	I0621 18:41:38.339154   35235 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:41:38.339175   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:41:38.342201   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:38.342572   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:41:38.342600   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:38.342797   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:41:38.342986   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:41:38.343175   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:41:38.343285   35235 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:41:38.423299   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:41:38.423361   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:41:38.446136   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:41:38.446198   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0621 18:41:38.468132   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:41:38.468213   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 18:41:38.489937   35235 provision.go:87] duration metric: took 327.620634ms to configureAuth
	I0621 18:41:38.489968   35235 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:41:38.490253   35235 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:41:38.490353   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:41:38.492907   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:38.493300   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:41:38.493332   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:38.493490   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:41:38.493676   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:41:38.493857   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:41:38.493965   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:41:38.494129   35235 main.go:141] libmachine: Using SSH client type: native
	I0621 18:41:38.494335   35235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:41:38.494363   35235 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:41:38.745084   35235 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 18:41:38.745110   35235 machine.go:97] duration metric: took 945.994237ms to provisionDockerMachine
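The SSH command just above writes a sysconfig drop-in carrying the '--insecure-registry 10.96.0.0/12' option and restarts CRI-O. A minimal sketch for checking the result on the guest (the path is taken from the log above; whether the crio systemd unit actually sources this file depends on the guest image, so treat this as an assumption):
	# Inspect the drop-in written by the provisioning step above
	cat /etc/sysconfig/crio.minikube
	# Show the crio unit plus any drop-ins, to see whether the sysconfig file is referenced
	systemctl cat crio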
	I0621 18:41:38.745120   35235 start.go:293] postStartSetup for "ha-406291-m02" (driver="kvm2")
	I0621 18:41:38.745129   35235 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:41:38.745144   35235 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:41:38.745474   35235 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:41:38.745498   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:41:38.748247   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:38.748661   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:41:38.748683   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:38.748882   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:41:38.749061   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:41:38.749235   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:41:38.749372   35235 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:41:38.831959   35235 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:41:38.835928   35235 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:41:38.835957   35235 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:41:38.836032   35235 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:41:38.836116   35235 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:41:38.836126   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:41:38.836212   35235 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:41:38.844773   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:41:38.867692   35235 start.go:296] duration metric: took 122.557034ms for postStartSetup
	I0621 18:41:38.867740   35235 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:41:38.868006   35235 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0621 18:41:38.868031   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:41:38.870401   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:38.870690   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:41:38.870735   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:38.870878   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:41:38.871049   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:41:38.871179   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:41:38.871317   35235 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:41:38.951453   35235 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0621 18:41:38.951568   35235 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0621 18:41:39.007442   35235 fix.go:56] duration metric: took 13.584678445s for fixHost
	I0621 18:41:39.007487   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:41:39.010155   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:39.010551   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:41:39.010579   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:39.010735   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:41:39.010956   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:41:39.011091   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:41:39.011224   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:41:39.011345   35235 main.go:141] libmachine: Using SSH client type: native
	I0621 18:41:39.011500   35235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:41:39.011507   35235 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0621 18:41:39.114136   35235 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718995299.079723478
	
	I0621 18:41:39.114155   35235 fix.go:216] guest clock: 1718995299.079723478
	I0621 18:41:39.114162   35235 fix.go:229] Guest: 2024-06-21 18:41:39.079723478 +0000 UTC Remote: 2024-06-21 18:41:39.007467135 +0000 UTC m=+13.642581494 (delta=72.256343ms)
	I0621 18:41:39.114178   35235 fix.go:200] guest clock delta is within tolerance: 72.256343ms
	I0621 18:41:39.114183   35235 start.go:83] releasing machines lock for "ha-406291-m02", held for 13.691435613s
	I0621 18:41:39.114199   35235 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:41:39.114511   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:41:39.117074   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:39.117429   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:41:39.117452   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:39.117607   35235 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:41:39.118097   35235 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:41:39.118279   35235 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:41:39.118373   35235 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:41:39.118417   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:41:39.118456   35235 ssh_runner.go:195] Run: systemctl --version
	I0621 18:41:39.118475   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:41:39.121123   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:39.121389   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:39.121534   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:41:39.121575   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:39.121700   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:41:39.121810   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:41:39.121835   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:39.121877   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:41:39.121972   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:41:39.122036   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:41:39.122113   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:41:39.122179   35235 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:41:39.122232   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:41:39.122333   35235 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:41:39.234301   35235 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:41:39.382103   35235 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:41:39.387475   35235 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:41:39.387529   35235 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:41:39.403693   35235 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0621 18:41:39.403716   35235 start.go:494] detecting cgroup driver to use...
	I0621 18:41:39.403769   35235 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:41:39.418555   35235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:41:39.431995   35235 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:41:39.432045   35235 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:41:39.446478   35235 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:41:39.459421   35235 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:41:39.576198   35235 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:41:39.736545   35235 docker.go:233] disabling docker service ...
	I0621 18:41:39.736599   35235 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:41:39.753584   35235 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:41:39.766092   35235 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:41:39.893661   35235 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:41:40.007219   35235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 18:41:40.020987   35235 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:41:40.038350   35235 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:41:40.038424   35235 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:41:40.048698   35235 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:41:40.048765   35235 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:41:40.058773   35235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:41:40.069126   35235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:41:40.079117   35235 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:41:40.089592   35235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:41:40.099897   35235 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:41:40.116226   35235 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:41:40.126293   35235 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:41:40.135067   35235 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 18:41:40.135110   35235 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 18:41:40.146762   35235 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 18:41:40.155796   35235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:41:40.273955   35235 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 18:41:40.400262   35235 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:41:40.400366   35235 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:41:40.404705   35235 start.go:562] Will wait 60s for crictl version
	I0621 18:41:40.404761   35235 ssh_runner.go:195] Run: which crictl
	I0621 18:41:40.408123   35235 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:41:40.446718   35235 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
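The sed commands earlier in this sequence rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) before the restart and the crictl version check above. A minimal sketch for spot-checking the values that were written (file path from the log; not part of the captured test run):
	# Check the keys edited by the sed commands above
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# Confirm the restarted runtime answers over its socket
	sudo crictl info >/dev/null && echo "crio responding"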
	I0621 18:41:40.446815   35235 ssh_runner.go:195] Run: crio --version
	I0621 18:41:40.474349   35235 ssh_runner.go:195] Run: crio --version
	I0621 18:41:40.503870   35235 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:41:40.505110   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:41:40.507792   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:40.508197   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:41:40.508224   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:41:40.508442   35235 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:41:40.512330   35235 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:41:40.525820   35235 mustload.go:65] Loading cluster: ha-406291
	I0621 18:41:40.526028   35235 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:41:40.526283   35235 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:40.526322   35235 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:40.541570   35235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
	I0621 18:41:40.541991   35235 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:40.542495   35235 main.go:141] libmachine: Using API Version  1
	I0621 18:41:40.542516   35235 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:40.542896   35235 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:40.543056   35235 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:41:40.544520   35235 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:41:40.544793   35235 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:41:40.544828   35235 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:41:40.560583   35235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40599
	I0621 18:41:40.561329   35235 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:41:40.561819   35235 main.go:141] libmachine: Using API Version  1
	I0621 18:41:40.561836   35235 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:41:40.562384   35235 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:41:40.562527   35235 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:41:40.562657   35235 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.89
	I0621 18:41:40.562669   35235 certs.go:194] generating shared ca certs ...
	I0621 18:41:40.562685   35235 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:41:40.562843   35235 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:41:40.562890   35235 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:41:40.562900   35235 certs.go:256] generating profile certs ...
	I0621 18:41:40.562983   35235 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:41:40.563075   35235 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63
	I0621 18:41:40.563124   35235 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:41:40.563136   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:41:40.563154   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:41:40.563173   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:41:40.563187   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:41:40.563202   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:41:40.563216   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:41:40.563229   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:41:40.563255   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:41:40.563312   35235 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:41:40.563349   35235 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:41:40.563363   35235 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:41:40.563391   35235 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:41:40.563417   35235 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:41:40.563444   35235 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:41:40.563483   35235 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:41:40.563515   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:41:40.563530   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:41:40.563544   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:41:40.563570   35235 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:41:40.566480   35235 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:41:40.566890   35235 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:41:40.566916   35235 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:41:40.567069   35235 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:41:40.567250   35235 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:41:40.567419   35235 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:41:40.567600   35235 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:41:40.634158   35235 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0621 18:41:40.639816   35235 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0621 18:41:40.650182   35235 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0621 18:41:40.653818   35235 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0621 18:41:40.663383   35235 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0621 18:41:40.667143   35235 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0621 18:41:40.676946   35235 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0621 18:41:40.681196   35235 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0621 18:41:40.692687   35235 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0621 18:41:40.696635   35235 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0621 18:41:40.706519   35235 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0621 18:41:40.710548   35235 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0621 18:41:40.721073   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:41:40.744850   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:41:40.770255   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:41:40.793885   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:41:40.818855   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0621 18:41:40.842932   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0621 18:41:40.864560   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:41:40.887186   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:41:40.908943   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:41:40.930236   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:41:40.952389   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:41:40.973993   35235 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0621 18:41:40.989089   35235 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0621 18:41:41.004282   35235 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0621 18:41:41.019635   35235 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0621 18:41:41.040987   35235 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0621 18:41:41.058089   35235 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0621 18:41:41.073644   35235 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
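The scp calls above distribute the shared CA material, the profile's apiserver and proxy-client key pairs, the service-account keys, the front-proxy and etcd CAs, and the kubeconfig onto the new control-plane node. A minimal sketch for spot-checking them on the guest (paths from the log; commands are illustrative, not part of the test):
	# List the certificates staged under /var/lib/minikube/certs
	sudo ls -l /var/lib/minikube/certs /var/lib/minikube/certs/etcd
	# Inspect the apiserver certificate's subject and SANs
	sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'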
	I0621 18:41:41.090884   35235 ssh_runner.go:195] Run: openssl version
	I0621 18:41:41.096367   35235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:41:41.107820   35235 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:41:41.111708   35235 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:41:41.111759   35235 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:41:41.116944   35235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 18:41:41.126635   35235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:41:41.136550   35235 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:41:41.140357   35235 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:41:41.140410   35235 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:41:41.145589   35235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0621 18:41:41.155902   35235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:41:41.166054   35235 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:41:41.170212   35235 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:41:41.170271   35235 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:41:41.175431   35235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:41:41.186311   35235 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:41:41.190080   35235 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 18:41:41.190134   35235 kubeadm.go:928] updating node {m02 192.168.39.89 8443 v1.30.2 crio true true} ...
	I0621 18:41:41.190237   35235 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
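The kubelet unit override printed above (kubeadm.go:940) pins the node name and IP for m02; presumably it is the content copied further down in this log as the 10-kubeadm.conf drop-in and kubelet.service file (an assumption based on the later scp lines). A minimal sketch for confirming the drop-in on the guest once it has been copied:
	# Show the kubelet unit together with its drop-ins
	systemctl cat kubelet
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf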
	I0621 18:41:41.190265   35235 kube-vip.go:115] generating kube-vip config ...
	I0621 18:41:41.190293   35235 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:41:41.205204   35235 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:41:41.205325   35235 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
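The manifest above is the static pod minikube later copies into /etc/kubernetes/manifests/kube-vip.yaml so kube-vip can advertise the control-plane VIP 192.168.39.254 on eth0 and load-balance port 8443 across control-plane nodes. A minimal sketch for checking the VIP from a guest (addresses taken from the config above; /version may still be rejected depending on the cluster's anonymous-auth setting, but any TLS answer shows the VIP is being served):
	# Is the VIP currently bound to this node's interface?
	ip addr show eth0 | grep -F 192.168.39.254
	# Does anything answer on the advertised address/port?
	curl -k --max-time 2 https://192.168.39.254:8443/version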
	I0621 18:41:41.205385   35235 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:41:41.216597   35235 binaries.go:47] Didn't find k8s binaries: didn't find preexisting kubelet
	Initiating transfer...
	I0621 18:41:41.216648   35235 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0621 18:41:41.225943   35235 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
	I0621 18:41:41.225951   35235 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256
	I0621 18:41:41.225944   35235 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0621 18:41:41.225984   35235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:41:41.225996   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0621 18:41:41.225972   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0621 18:41:41.226088   35235 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0621 18:41:41.226158   35235 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0621 18:41:41.239628   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0621 18:41:41.239672   35235 ssh_runner.go:356] copy: skipping /var/lib/minikube/binaries/v1.30.2/kubectl (exists)
	I0621 18:41:41.239707   35235 ssh_runner.go:356] copy: skipping /var/lib/minikube/binaries/v1.30.2/kubeadm (exists)
	I0621 18:41:41.239727   35235 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0621 18:41:41.243410   35235 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0621 18:41:41.243438   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
	I0621 18:41:41.698257   35235 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0621 18:41:41.707723   35235 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0621 18:41:41.724639   35235 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0621 18:41:41.743101   35235 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0621 18:41:41.760944   35235 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0621 18:41:41.764890   35235 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:41:41.776520   35235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:41:41.886856   35235 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0621 18:41:41.903133   35235 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:41:41.903253   35235 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0621 18:41:41.903431   35235 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:41:41.905550   35235 out.go:177] * Enabled addons: 
	I0621 18:41:41.905576   35235 out.go:177] * Verifying Kubernetes components...
	I0621 18:41:41.906878   35235 addons.go:510] duration metric: took 3.645796ms for enable addons: enabled=[]
	I0621 18:41:41.907017   35235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:41:42.037375   35235 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0621 18:41:42.751725   35235 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:41:42.752154   35235 kapi.go:59] client config for ha-406291: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt", KeyFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key", CAFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf98a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0621 18:41:42.752273   35235 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.198:8443
	I0621 18:41:42.752738   35235 cert_rotation.go:137] Starting client certificate rotation controller
	I0621 18:41:42.752934   35235 node_ready.go:35] waiting up to 6m0s for node "ha-406291-m02" to be "Ready" ...
	I0621 18:41:42.753014   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:42.753026   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:42.753035   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:42.753044   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:42.761710   35235 round_trippers.go:574] Response Status: 404 Not Found in 8 milliseconds
	I0621 18:41:43.253361   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:43.253384   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:43.253392   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:43.253397   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:43.255457   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:43.753171   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:43.753205   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:43.753214   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:43.753218   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:43.755464   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:44.253985   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:44.254017   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:44.254028   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:44.254033   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:44.256556   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:44.753160   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:44.753190   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:44.753199   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:44.753207   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:44.755509   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:44.755615   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
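The repeating GET /api/v1/nodes/ha-406291-m02 → 404 exchanges are node_ready.go polling roughly every 500 ms until the node object is registered, bounded by the 6-minute wait announced at start.go:234. A minimal sketch of the equivalent check by hand (kubeconfig path, node name and timeout taken from this log):
	# Watch for the node object to appear, then wait for it to report Ready
	kubectl --kubeconfig /home/jenkins/minikube-integration/19112-8111/kubeconfig get node ha-406291-m02 --watch
	kubectl --kubeconfig /home/jenkins/minikube-integration/19112-8111/kubeconfig wait --for=condition=Ready node/ha-406291-m02 --timeout=6m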
	I0621 18:41:45.253212   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:45.253234   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:45.253242   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:45.253245   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:45.255313   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:45.753311   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:45.753333   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:45.753340   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:45.753344   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:45.756039   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:46.253908   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:46.253976   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:46.253991   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:46.253997   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:46.259190   35235 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0621 18:41:46.753761   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:46.753783   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:46.753791   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:46.753808   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:46.756233   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:46.756359   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:41:47.254003   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:47.254033   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:47.254044   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:47.254050   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:47.256388   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:47.754169   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:47.754190   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:47.754198   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:47.754203   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:47.756582   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:48.253249   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:48.253269   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:48.253276   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:48.253282   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:48.255342   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:48.754157   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:48.754184   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:48.754195   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:48.754201   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:48.757057   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:48.757168   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:41:49.253926   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:49.253948   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:49.253955   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:49.253959   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:49.256216   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:49.753944   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:49.753976   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:49.753985   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:49.753989   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:49.756138   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:50.253937   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:50.253959   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:50.253967   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:50.253973   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:50.256423   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:50.753631   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:50.753672   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:50.753680   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:50.753684   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:50.755842   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:51.253486   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:51.253508   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:51.253516   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:51.253520   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:51.255914   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:51.256023   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:41:51.753640   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:51.753668   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:51.753679   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:51.753687   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:51.756525   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:52.253151   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:52.253175   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:52.253185   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:52.253191   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:52.255417   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:52.753719   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:52.753744   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:52.753752   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:52.753756   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:52.756141   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:53.253574   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:53.253594   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:53.253601   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:53.253606   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:53.255928   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:53.256044   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:41:53.753598   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:53.753618   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:53.753626   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:53.753630   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:53.756090   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:54.253878   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:54.253900   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:54.253908   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:54.253911   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:54.256262   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:54.753183   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:54.753215   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:54.753225   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:54.753229   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:54.757306   35235 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0621 18:41:55.254087   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:55.254109   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:55.254116   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:55.254120   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:55.256290   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:55.256391   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:41:55.753269   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:55.753293   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:55.753300   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:55.753304   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:55.755737   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:56.253447   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:56.253496   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:56.253507   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:56.253513   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:56.255797   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:56.753462   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:56.753489   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:56.753498   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:56.753509   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:56.755610   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:57.253266   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:57.253286   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:57.253293   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:57.253302   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:57.255333   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:57.754092   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:57.754113   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:57.754121   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:57.754125   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:57.756587   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:57.756713   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:41:58.253252   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:58.253277   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:58.253293   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:58.253299   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:58.255468   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:58.753160   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:58.753184   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:58.753192   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:58.753195   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:58.755547   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:59.253241   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:59.253276   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:59.253287   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:59.253291   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:59.255669   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:41:59.753367   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:41:59.753392   35235 round_trippers.go:469] Request Headers:
	I0621 18:41:59.753401   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:41:59.753407   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:41:59.755615   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:00.253267   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:00.253399   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:00.253557   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:00.253571   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:00.256856   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0621 18:42:00.256949   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:00.753594   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:00.753633   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:00.753643   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:00.753647   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:00.756443   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:01.253121   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:01.253143   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:01.253150   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:01.253156   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:01.255464   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:01.753187   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:01.753225   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:01.753238   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:01.753244   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:01.755643   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:02.253356   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:02.253378   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:02.253387   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:02.253391   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:02.256121   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:02.753904   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:02.753934   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:02.753942   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:02.753947   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:02.756015   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:02.756101   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:03.253925   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:03.253959   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:03.253970   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:03.253974   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:03.256199   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:03.753971   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:03.753997   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:03.754007   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:03.754012   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:03.756158   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:04.253963   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:04.253985   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:04.253993   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:04.253997   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:04.256107   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:04.753868   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:04.753891   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:04.753899   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:04.753902   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:04.758115   35235 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0621 18:42:04.758305   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:05.253485   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:05.253509   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:05.253516   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:05.253521   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:05.255980   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:05.754131   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:05.754153   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:05.754161   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:05.754166   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:05.756385   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:06.253134   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:06.253163   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:06.253171   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:06.253176   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:06.255582   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:06.753260   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:06.753301   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:06.753310   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:06.753316   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:06.755505   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:07.253240   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:07.253276   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:07.253288   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:07.253293   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:07.255461   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:07.255567   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:07.753165   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:07.753186   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:07.753193   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:07.753197   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:07.755449   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:08.253180   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:08.253203   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:08.253210   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:08.253214   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:08.255478   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:08.753122   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:08.753144   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:08.753150   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:08.753154   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:08.755775   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:09.253414   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:09.253446   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:09.253454   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:09.253458   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:09.255954   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:09.256045   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:09.753642   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:09.753670   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:09.753681   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:09.753686   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:09.756626   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:10.253354   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:10.253383   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:10.253392   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:10.253398   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:10.255677   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:10.753063   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:10.753086   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:10.753093   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:10.753097   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:10.755029   35235 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0621 18:42:11.253774   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:11.253825   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:11.253838   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:11.253843   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:11.256408   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:11.256528   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:11.754151   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:11.754171   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:11.754179   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:11.754182   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:11.756541   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:12.253205   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:12.253229   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:12.253237   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:12.253244   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:12.257722   35235 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0621 18:42:12.753388   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:12.753417   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:12.753429   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:12.753436   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:12.755570   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:13.253250   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:13.253273   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:13.253281   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:13.253285   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:13.255704   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:13.753395   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:13.753423   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:13.753431   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:13.753436   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:13.756058   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:13.756196   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:14.253863   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:14.253887   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:14.253894   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:14.253899   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:14.256504   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:14.753198   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:14.753219   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:14.753227   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:14.753231   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:14.756110   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:15.253908   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:15.253953   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:15.253961   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:15.253966   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:15.256153   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:15.753330   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:15.753361   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:15.753373   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:15.753379   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:15.756028   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:16.253789   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:16.253837   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:16.253848   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:16.253854   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:16.256302   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:16.256407   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:16.754028   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:16.754060   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:16.754068   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:16.754074   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:16.756338   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:17.254142   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:17.254167   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:17.254179   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:17.254186   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:17.257058   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:17.753820   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:17.753845   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:17.753854   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:17.753859   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:17.756211   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:18.253941   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:18.253967   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:18.253979   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:18.253984   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:18.256278   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:18.754069   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:18.754094   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:18.754104   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:18.754111   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:18.757002   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:18.757131   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:19.253739   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:19.253762   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:19.253769   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:19.253778   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:19.256223   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:19.754025   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:19.754049   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:19.754058   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:19.754063   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:19.756690   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:20.253368   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:20.253390   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:20.253403   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:20.253407   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:20.256257   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:20.754183   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:20.754206   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:20.754216   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:20.754224   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:20.756539   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:21.253199   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:21.253220   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:21.253228   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:21.253233   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:21.255840   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:21.255936   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:21.753575   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:21.753603   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:21.753613   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:21.753619   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:21.755746   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:22.253402   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:22.253424   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:22.253431   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:22.253436   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:22.256162   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:22.753987   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:22.754007   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:22.754014   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:22.754021   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:22.756609   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:23.253300   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:23.253325   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:23.253333   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:23.253338   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:23.256293   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:23.256396   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:23.754045   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:23.754067   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:23.754075   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:23.754078   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:23.756374   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:24.254184   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:24.254207   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:24.254216   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:24.254220   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:24.256646   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:24.753347   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:24.753373   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:24.753385   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:24.753392   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:24.757869   35235 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0621 18:42:25.253523   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:25.253546   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:25.253553   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:25.253557   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:25.255919   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:25.754162   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:25.754188   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:25.754199   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:25.754205   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:25.757204   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:25.757300   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:26.253996   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:26.254023   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:26.254034   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:26.254039   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:26.256738   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:26.753420   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:26.753443   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:26.753450   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:26.753455   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:26.755671   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:27.253339   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:27.253364   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:27.253371   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:27.253375   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:27.256205   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:27.753997   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:27.754021   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:27.754026   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:27.754030   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:27.756311   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:28.254096   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:28.254119   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:28.254129   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:28.254136   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:28.256400   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:28.256508   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:28.753114   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:28.753142   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:28.753149   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:28.753152   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:28.755794   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:29.253467   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:29.253506   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:29.253515   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:29.253520   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:29.255937   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:29.753230   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:29.753253   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:29.753261   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:29.753264   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:29.755510   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:30.253160   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:30.253188   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:30.253199   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:30.253204   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:30.255843   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:30.753685   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:30.753706   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:30.753714   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:30.753718   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:30.756184   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:30.756306   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:31.253930   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:31.253958   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:31.253966   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:31.253970   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:31.256331   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:31.754108   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:31.754136   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:31.754147   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:31.754153   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:31.756842   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:32.253126   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:32.253145   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:32.253153   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:32.253157   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:32.255626   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:32.753394   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:32.753423   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:32.753436   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:32.753441   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:32.759766   35235 round_trippers.go:574] Response Status: 404 Not Found in 6 milliseconds
	I0621 18:42:32.759867   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:33.253454   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:33.253477   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:33.253486   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:33.253493   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:33.256193   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:33.753896   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:33.753930   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:33.753937   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:33.753940   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:33.756411   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:34.253071   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:34.253105   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:34.253113   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:34.253116   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:34.255378   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:34.754073   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:34.754104   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:34.754112   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:34.754117   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:34.756791   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:35.253138   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:35.253166   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:35.253176   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:35.253181   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:35.255680   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:35.255791   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:35.753769   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:35.753793   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:35.753821   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:35.753828   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:35.756205   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:36.253942   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:36.253972   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:36.253985   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:36.253990   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:36.256241   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:36.753958   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:36.753982   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:36.754006   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:36.754013   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:36.756337   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:37.254108   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:37.254134   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:37.254148   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:37.254152   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:37.256697   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:37.256821   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:37.753346   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:37.753370   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:37.753378   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:37.753383   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:37.755503   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:38.253147   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:38.253172   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:38.253182   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:38.253186   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:38.256886   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0621 18:42:38.753274   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:38.753305   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:38.753315   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:38.753322   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:38.755756   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:39.253414   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:39.253441   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:39.253449   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:39.253454   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:39.256586   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0621 18:42:39.753328   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:39.753366   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:39.753374   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:39.753380   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:39.755869   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:39.755974   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:40.253555   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:40.253577   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:40.253585   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:40.253589   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:40.255802   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:40.753689   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:40.753711   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:40.753720   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:40.753724   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:40.756155   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:41.253945   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:41.253969   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:41.253978   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:41.253984   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:41.256566   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:41.753259   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:41.753284   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:41.753292   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:41.753296   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:41.756013   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:41.756172   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:42.253766   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:42.253789   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:42.253805   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:42.253811   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:42.256327   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:42.753105   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:42.753127   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:42.753137   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:42.753141   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:42.755495   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:43.253158   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:43.253179   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:43.253187   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:43.253192   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:43.255316   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:43.754058   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:43.754079   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:43.754087   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:43.754090   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:43.756779   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:43.756888   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:44.253472   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:44.253494   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:44.253503   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:44.253506   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:44.256311   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:44.754068   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:44.754088   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:44.754095   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:44.754099   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:44.756462   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:45.253132   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:45.253163   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:45.253173   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:45.253177   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:45.255775   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:45.753992   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:45.754022   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:45.754033   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:45.754039   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:45.756508   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:46.253201   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:46.253222   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:46.253228   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:46.253233   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:46.255332   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:46.255455   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:46.754119   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:46.754140   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:46.754147   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:46.754150   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:46.757068   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:47.253888   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:47.253912   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:47.253921   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:47.253930   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:47.256903   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:47.753583   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:47.753605   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:47.753611   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:47.753615   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:47.756074   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:48.253811   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:48.253833   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:48.253844   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:48.253850   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:48.256655   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:48.256749   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:48.753312   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:48.753336   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:48.753345   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:48.753349   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:48.755629   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:49.253237   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:49.253260   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:49.253270   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:49.253274   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:49.255503   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:49.753184   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:49.753205   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:49.753213   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:49.753218   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:49.756006   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:50.253818   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:50.253844   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:50.253856   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:50.253862   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:50.256953   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0621 18:42:50.257059   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:50.754033   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:50.754054   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:50.754062   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:50.754066   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:50.756622   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:51.253295   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:51.253316   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:51.253324   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:51.253327   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:51.255813   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:51.753510   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:51.753533   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:51.753541   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:51.753544   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:51.755825   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:52.253506   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:52.253528   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:52.253535   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:52.253539   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:52.255863   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:52.753660   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:52.753681   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:52.753688   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:52.753692   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:52.756168   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:52.756259   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:53.253472   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:53.253494   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:53.253503   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:53.253511   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:53.256126   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:53.753943   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:53.753965   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:53.753972   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:53.753976   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:53.756180   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:54.253977   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:54.254000   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:54.254008   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:54.254011   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:54.257279   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0621 18:42:54.753658   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:54.753688   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:54.753698   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:54.753704   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:54.756429   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:54.756533   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:55.253133   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:55.253154   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:55.253162   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:55.253166   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:55.255548   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:55.753272   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:55.753294   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:55.753301   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:55.753306   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:55.755515   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:56.253219   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:56.253239   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:56.253246   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:56.253252   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:56.255877   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:56.753551   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:56.753574   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:56.753581   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:56.753585   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:56.756745   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0621 18:42:56.756925   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:57.253505   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:57.253529   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:57.253541   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:57.253548   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:57.255986   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:57.753791   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:57.753842   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:57.753852   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:57.753856   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:57.757122   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0621 18:42:58.253959   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:58.253982   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:58.253990   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:58.253995   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:58.256342   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:58.754111   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:58.754137   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:58.754145   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:58.754148   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:58.756826   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:59.253496   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:59.253517   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:59.253525   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:59.253528   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:59.255815   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:42:59.255919   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:42:59.753196   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:42:59.753218   35235 round_trippers.go:469] Request Headers:
	I0621 18:42:59.753225   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:42:59.753228   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:42:59.756927   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0621 18:43:00.253645   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:00.253673   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:00.253682   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:00.253685   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:00.256727   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0621 18:43:00.753832   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:00.753860   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:00.753871   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:00.753877   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:00.757381   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0621 18:43:01.254063   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:01.254085   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:01.254092   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:01.254097   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:01.256220   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:01.256318   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:01.753941   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:01.753973   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:01.753985   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:01.753990   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:01.756534   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:02.253243   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:02.253273   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:02.253281   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:02.253284   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:02.255769   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:02.753560   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:02.753584   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:02.753591   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:02.753596   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:02.756335   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:03.254108   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:03.254137   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:03.254145   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:03.254148   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:03.256538   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:03.256640   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:03.753199   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:03.753251   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:03.753265   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:03.753272   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:03.755656   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:04.253292   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:04.253312   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:04.253320   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:04.253324   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:04.255471   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:04.753157   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:04.753179   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:04.753186   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:04.753191   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:04.755591   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:05.253259   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:05.253280   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:05.253287   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:05.253292   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:05.256074   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:05.753086   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:05.753109   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:05.753116   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:05.753120   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:05.755731   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:05.755839   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:06.253429   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:06.253464   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:06.253472   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:06.253476   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:06.255749   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:06.753405   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:06.753451   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:06.753458   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:06.753462   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:06.756151   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:07.253952   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:07.253973   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:07.253981   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:07.253983   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:07.256319   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:07.754096   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:07.754123   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:07.754138   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:07.754148   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:07.757338   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0621 18:43:07.757461   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:08.254099   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:08.254121   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:08.254129   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:08.254133   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:08.256774   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:08.753440   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:08.753462   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:08.753469   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:08.753474   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:08.756358   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:09.254096   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:09.254117   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:09.254125   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:09.254129   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:09.256429   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:09.753127   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:09.753150   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:09.753161   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:09.753167   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:09.755586   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:10.253272   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:10.253294   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:10.253302   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:10.253306   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:10.255631   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:10.255739   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:10.753668   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:10.753696   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:10.753706   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:10.753713   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:10.756201   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:11.253962   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:11.253985   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:11.253993   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:11.253997   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:11.256834   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:11.753498   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:11.753530   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:11.753538   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:11.753541   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:11.756002   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:12.253852   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:12.253878   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:12.253889   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:12.253894   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:12.255623   35235 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0621 18:43:12.753348   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:12.753368   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:12.753376   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:12.753380   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:12.756773   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0621 18:43:12.756924   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:13.253240   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:13.253269   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:13.253279   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:13.253283   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:13.255681   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:13.753478   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:13.753514   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:13.753525   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:13.753529   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:13.755934   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:14.253664   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:14.253691   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:14.253702   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:14.253708   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:14.255944   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:14.753658   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:14.753690   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:14.753701   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:14.753708   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:14.756145   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:15.253911   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:15.253939   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:15.253950   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:15.253955   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:15.256242   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:15.256332   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:15.753142   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:15.753171   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:15.753192   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:15.753198   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:15.755492   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:16.253211   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:16.253233   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:16.253241   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:16.253245   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:16.255511   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:16.753200   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:16.753230   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:16.753241   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:16.753247   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:16.755576   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:17.253273   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:17.253302   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:17.253311   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:17.253318   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:17.255913   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:17.753621   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:17.753649   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:17.753659   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:17.753663   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:17.756926   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0621 18:43:17.757048   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:18.253566   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:18.253589   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:18.253597   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:18.253602   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:18.255644   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:18.753408   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:18.753435   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:18.753446   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:18.753454   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:18.756037   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:19.253726   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:19.253747   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:19.253754   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:19.253757   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:19.255901   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:19.753588   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:19.753610   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:19.753618   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:19.753625   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:19.756088   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:20.253881   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:20.253910   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:20.253924   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:20.253953   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:20.256596   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:20.256721   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:20.753382   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:20.753404   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:20.753413   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:20.753418   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:20.756358   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:21.254088   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:21.254110   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:21.254121   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:21.254126   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:21.256303   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:21.754081   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:21.754108   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:21.754124   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:21.754131   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:21.757208   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0621 18:43:22.253974   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:22.254000   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:22.254012   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:22.254018   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:22.256304   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:22.754129   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:22.754151   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:22.754163   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:22.754169   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:22.756500   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:22.756606   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:23.253946   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:23.253971   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:23.253982   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:23.253987   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:23.256653   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:23.753315   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:23.753339   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:23.753351   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:23.753356   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:23.755944   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:24.253606   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:24.253631   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:24.253642   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:24.253648   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:24.256093   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:24.753882   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:24.753906   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:24.753917   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:24.753925   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:24.756558   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:24.756656   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:25.253213   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:25.253248   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:25.253270   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:25.253277   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:25.255472   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:25.753250   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:25.753272   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:25.753279   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:25.753282   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:25.755573   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:26.253253   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:26.253279   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:26.253287   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:26.253293   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:26.256024   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:26.753826   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:26.753845   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:26.753854   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:26.753858   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:26.755913   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:27.253577   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:27.253603   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:27.253612   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:27.253616   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:27.256165   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:27.256300   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:27.753976   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:27.754001   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:27.754010   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:27.754014   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:27.756115   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:28.253925   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:28.253947   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:28.253955   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:28.253965   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:28.256436   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:28.753133   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:28.753157   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:28.753165   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:28.753170   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:28.755397   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:29.253099   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:29.253122   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:29.253129   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:29.253135   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:29.256178   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0621 18:43:29.753984   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:29.754006   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:29.754022   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:29.754026   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:29.755897   35235 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0621 18:43:29.756008   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:30.254099   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:30.254122   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:30.254130   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:30.254134   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:30.256362   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:30.754136   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:30.754157   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:30.754165   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:30.754170   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:30.756422   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:31.254116   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:31.254138   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:31.254146   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:31.254150   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:31.256221   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:31.753960   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:31.753982   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:31.753990   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:31.753995   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:31.756200   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:31.756313   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:32.253983   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:32.254005   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:32.254013   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:32.254017   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:32.256078   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:32.753997   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:32.754018   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:32.754028   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:32.754035   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:32.756287   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:33.254048   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:33.254069   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:33.254076   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:33.254079   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:33.256373   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:33.754131   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:33.754156   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:33.754164   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:33.754171   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:33.756488   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:33.756588   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:34.253166   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:34.253190   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:34.253199   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:34.253203   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:34.255400   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:34.753125   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:34.753154   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:34.753163   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:34.753168   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:34.755457   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:35.253151   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:35.253178   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:35.253187   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:35.253191   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:35.256046   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:35.754110   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:35.754165   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:35.754179   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:35.754185   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:35.756571   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:35.756693   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:36.253235   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:36.253260   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:36.253270   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:36.253276   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:36.255776   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:36.753435   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:36.753457   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:36.753469   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:36.753478   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:36.755864   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:37.253530   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:37.253558   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:37.253569   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:37.253575   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:37.255768   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:37.753419   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:37.753447   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:37.753458   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:37.753463   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:37.755842   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:38.253318   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:38.253343   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:38.253355   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:38.253362   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:38.255755   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:38.255877   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:38.753477   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:38.753504   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:38.753512   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:38.753517   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:38.755767   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:39.253430   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:39.253450   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:39.253457   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:39.253463   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:39.255589   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:39.753233   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:39.753260   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:39.753270   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:39.753276   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:39.755668   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:40.253355   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:40.253390   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:40.253401   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:40.253406   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:40.255839   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:40.255989   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:40.753697   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:40.753717   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:40.753724   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:40.753727   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:40.756179   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:41.253951   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:41.253973   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:41.253981   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:41.253986   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:41.256552   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:41.753265   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:41.753288   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:41.753296   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:41.753303   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:41.755598   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:42.253276   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:42.253300   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:42.253308   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:42.253312   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:42.255651   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:42.753497   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:42.753521   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:42.753530   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:42.753535   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:42.756468   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:42.756599   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:43.253154   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:43.253180   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:43.253190   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:43.253195   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:43.255537   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:43.753238   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:43.753265   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:43.753277   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:43.753282   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:43.755936   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:44.253576   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:44.253596   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:44.253602   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:44.253605   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:44.255821   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:44.753231   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:44.753254   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:44.753261   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:44.753267   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:44.755628   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:45.253355   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:45.253388   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:45.253398   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:45.253403   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:45.255498   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:45.255599   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:45.753559   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:45.753581   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:45.753588   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:45.753592   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:45.755971   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:46.253637   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:46.253659   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:46.253667   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:46.253670   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:46.255870   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:46.753524   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:46.753546   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:46.753553   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:46.753558   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:46.755816   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:47.253503   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:47.253527   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:47.253535   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:47.253539   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:47.255982   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:47.256080   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:47.753719   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:47.753741   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:47.753747   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:47.753751   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:47.756084   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:48.253863   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:48.253882   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:48.253890   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:48.253895   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:48.256321   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:48.754097   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:48.754125   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:48.754133   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:48.754137   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:48.756772   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:49.253414   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:49.253435   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:49.253443   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:49.253447   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:49.256024   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:49.256114   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:49.753782   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:49.753817   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:49.753826   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:49.753830   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:49.756294   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:50.254038   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:50.254062   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:50.254071   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:50.254079   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:50.256503   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:50.753420   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:50.753444   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:50.753456   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:50.753461   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:50.755767   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:51.253472   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:51.253498   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:51.253504   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:51.253508   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:51.255753   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:51.754121   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:51.754148   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:51.754160   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:51.754169   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:51.756676   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:51.756799   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:52.253316   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:52.253345   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:52.253355   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:52.253362   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:52.255773   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:52.753500   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:52.753535   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:52.753543   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:52.753547   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:52.755866   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:53.253575   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:53.253595   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:53.253603   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:53.253606   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:53.255800   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:53.753469   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:53.753497   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:53.753507   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:53.753512   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:53.755769   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:54.253422   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:54.253443   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:54.253451   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:54.253454   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:54.255615   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:54.255730   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:54.753348   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:54.753371   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:54.753379   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:54.753384   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:54.756006   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:55.253765   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:55.253816   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:55.253832   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:55.253837   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:55.256102   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:55.753196   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:55.753224   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:55.753235   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:55.753240   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:55.755510   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:56.253249   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:56.253281   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:56.253295   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:56.253303   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:56.256169   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:56.256296   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:56.753979   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:56.753999   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:56.754006   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:56.754011   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:56.756516   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:57.253169   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:57.253189   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:57.253196   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:57.253202   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:57.255709   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:57.753378   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:57.753400   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:57.753407   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:57.753411   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:57.756612   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0621 18:43:58.253258   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:58.253290   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:58.253296   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:58.253299   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:58.255806   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:58.753454   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:58.753477   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:58.753485   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:58.753493   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:58.755850   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:58.755983   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:43:59.253493   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:59.253514   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:59.253522   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:59.253525   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:59.255828   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:43:59.753511   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:43:59.753533   35235 round_trippers.go:469] Request Headers:
	I0621 18:43:59.753541   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:43:59.753544   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:43:59.755791   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:00.253485   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:00.253513   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:00.253521   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:00.253526   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:00.256129   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:00.753738   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:00.753764   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:00.753772   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:00.753776   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:00.756401   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:00.756523   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:01.254111   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:01.254135   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:01.254143   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:01.254147   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:01.256190   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:01.754048   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:01.754075   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:01.754082   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:01.754086   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:01.756494   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:02.253216   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:02.253241   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:02.253252   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:02.253260   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:02.255453   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:02.753135   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:02.753158   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:02.753166   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:02.753171   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:02.755390   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:03.254113   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:03.254136   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:03.254144   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:03.254148   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:03.256256   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:03.256480   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:03.754075   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:03.754100   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:03.754111   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:03.754118   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:03.756529   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:04.253240   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:04.253263   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:04.253270   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:04.253275   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:04.255430   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:04.753150   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:04.753176   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:04.753189   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:04.753195   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:04.755405   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:05.253064   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:05.253119   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:05.253131   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:05.253136   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:05.256296   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0621 18:44:05.256529   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:05.753351   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:05.753374   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:05.753383   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:05.753387   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:05.755749   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:06.253439   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:06.253462   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:06.253474   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:06.253479   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:06.256427   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:06.753144   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:06.753166   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:06.753177   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:06.753183   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:06.755697   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:07.253393   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:07.253418   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:07.253428   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:07.253434   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:07.255527   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:07.753211   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:07.753240   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:07.753248   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:07.753251   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:07.755438   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:07.755544   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:08.253146   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:08.253169   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:08.253180   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:08.253186   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:08.257404   35235 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0621 18:44:08.754150   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:08.754174   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:08.754185   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:08.754190   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:08.756461   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:09.253177   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:09.253205   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:09.253212   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:09.253217   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:09.255685   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:09.753345   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:09.753363   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:09.753374   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:09.753381   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:09.755703   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:09.755818   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:10.253233   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:10.253256   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:10.253264   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:10.253268   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:10.255460   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:10.753414   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:10.753434   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:10.753441   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:10.753446   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:10.756028   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:11.253831   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:11.253855   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:11.253864   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:11.253868   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:11.256408   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:11.753114   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:11.753141   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:11.753151   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:11.753155   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:11.755503   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:12.253193   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:12.253220   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:12.253228   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:12.253232   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:12.255424   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:12.255520   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:12.753933   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:12.753952   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:12.753965   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:12.753969   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:12.756711   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:13.253381   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:13.253409   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:13.253416   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:13.253422   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:13.256041   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:13.753786   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:13.753822   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:13.753833   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:13.753837   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:13.755942   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:14.253592   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:14.253615   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:14.253622   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:14.253626   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:14.256403   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:14.256498   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:14.753105   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:14.753127   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:14.753135   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:14.753138   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:14.755470   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:15.253125   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:15.253146   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:15.253153   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:15.253157   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:15.255470   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:15.753440   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:15.753464   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:15.753474   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:15.753479   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:15.757073   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0621 18:44:16.253853   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:16.253872   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:16.253880   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:16.253884   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:16.256131   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:16.753972   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:16.753996   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:16.754003   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:16.754006   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:16.756320   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:16.756430   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:17.254075   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:17.254102   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:17.254111   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:17.254114   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:17.256665   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:17.753372   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:17.753403   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:17.753414   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:17.753418   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:17.755677   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:18.253370   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:18.253394   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:18.253401   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:18.253407   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:18.255899   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:18.753459   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:18.753481   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:18.753489   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:18.753493   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:18.756430   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:18.756533   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:19.253235   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:19.253264   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:19.253275   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:19.253281   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:19.255426   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:19.753102   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:19.753128   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:19.753142   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:19.753146   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:19.755881   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:20.253619   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:20.253653   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:20.253664   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:20.253672   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:20.255868   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:20.753704   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:20.753726   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:20.753733   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:20.753737   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:20.756139   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:21.253737   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:21.253760   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:21.253766   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:21.253770   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:21.255914   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:21.256015   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:21.753638   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:21.753666   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:21.753677   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:21.753683   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:21.756099   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:22.254063   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:22.254084   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:22.254093   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:22.254099   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:22.256675   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:22.753446   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:22.753490   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:22.753498   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:22.753521   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:22.755891   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:23.253572   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:23.253594   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:23.253602   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:23.253607   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:23.255945   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:23.256068   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:23.753627   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:23.753649   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:23.753657   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:23.753660   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:23.756098   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:24.253879   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:24.253903   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:24.253928   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:24.253933   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:24.255876   35235 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0621 18:44:24.753548   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:24.753569   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:24.753578   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:24.753583   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:24.756004   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:25.253845   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:25.253870   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:25.253878   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:25.253881   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:25.256229   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:25.256332   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:25.753201   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:25.753222   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:25.753230   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:25.753235   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:25.755778   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:26.253532   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:26.253560   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:26.253572   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:26.253579   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:26.256011   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:26.753500   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:26.753525   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:26.753537   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:26.753542   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:26.755797   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:27.253471   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:27.253497   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:27.253505   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:27.253511   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:27.255826   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:27.753539   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:27.753565   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:27.753575   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:27.753579   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:27.756102   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:27.756216   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:28.253894   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:28.253920   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:28.253932   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:28.253938   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:28.256388   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:28.753678   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:28.753709   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:28.753718   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:28.753722   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:28.756027   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:29.253758   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:29.253784   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:29.253793   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:29.253814   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:29.256028   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:29.753737   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:29.753760   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:29.753768   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:29.753771   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:29.756179   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:29.756294   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:30.253915   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:30.253942   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:30.253957   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:30.253962   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:30.256414   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:30.753479   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:30.753500   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:30.753509   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:30.753515   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:30.756407   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:31.254125   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:31.254147   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:31.254156   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:31.254160   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:31.256213   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:31.753958   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:31.753983   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:31.753991   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:31.753997   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:31.756682   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:31.756791   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:32.253389   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:32.253412   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:32.253423   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:32.253427   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:32.256484   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0621 18:44:32.753165   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:32.753190   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:32.753202   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:32.753209   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:32.755553   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:33.253228   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:33.253249   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:33.253264   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:33.253271   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:33.255694   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:33.753130   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:33.753157   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:33.753166   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:33.753174   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:33.755727   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:34.253411   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:34.253435   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:34.253442   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:34.253447   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:34.255741   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:34.255854   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:34.753417   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:34.753442   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:34.753454   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:34.753459   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:34.756164   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:35.253746   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:35.253769   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:35.253781   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:35.253785   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:35.255949   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:35.753180   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:35.753204   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:35.753220   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:35.753224   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:35.755860   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:36.253496   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:36.253537   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:36.253544   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:36.253548   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:36.255722   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:36.753441   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:36.753466   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:36.753477   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:36.753481   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:36.756306   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:36.756401   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:37.254079   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:37.254100   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:37.254107   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:37.254110   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:37.256481   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:37.753199   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:37.753234   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:37.753242   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:37.753246   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:37.755800   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:38.253519   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:38.253548   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:38.253559   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:38.253567   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:38.256131   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:38.753661   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:38.753683   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:38.753691   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:38.753696   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:38.756247   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:39.254003   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:39.254027   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:39.254034   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:39.254037   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:39.256345   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:39.256439   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:39.754061   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:39.754081   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:39.754089   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:39.754092   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:39.756926   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:40.253621   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:40.253650   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:40.253660   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:40.253664   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:40.255986   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:40.754015   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:40.754041   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:40.754052   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:40.754060   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:40.756357   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:41.253792   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:41.253822   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:41.253830   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:41.253835   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:41.256450   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:41.256576   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:41.753156   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:41.753181   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:41.753189   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:41.753192   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:41.755721   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:42.253422   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:42.253448   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:42.253456   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:42.253461   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:42.255626   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:42.753398   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:42.753419   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:42.753428   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:42.753432   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:42.756145   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:43.253928   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:43.253955   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:43.253967   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:43.253971   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:43.256730   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:43.256834   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:43.753403   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:43.753426   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:43.753433   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:43.753437   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:43.755806   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:44.253486   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:44.253510   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:44.253518   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:44.253523   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:44.256005   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:44.753773   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:44.753822   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:44.753832   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:44.753839   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:44.756148   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:45.253938   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:45.253965   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:45.253978   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:45.253983   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:45.256332   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:45.753319   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:45.753343   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:45.753351   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:45.753355   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:45.755917   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:45.756046   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:46.253601   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:46.253622   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:46.253634   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:46.253638   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:46.256124   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:46.753892   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:46.753915   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:46.753923   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:46.753926   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:46.756405   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:47.254133   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:47.254159   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:47.254183   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:47.254190   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:47.256769   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:47.753417   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:47.753450   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:47.753458   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:47.753463   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:47.755930   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:48.253628   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:48.253651   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:48.253658   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:48.253663   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:48.255838   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:48.255931   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:48.753538   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:48.753563   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:48.753574   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:48.753580   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:48.756631   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0621 18:44:49.253251   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:49.253275   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:49.253306   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:49.253313   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:49.256044   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:49.753793   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:49.753839   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:49.753849   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:49.753855   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:49.756074   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:50.253898   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:50.253922   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:50.253932   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:50.253936   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:50.256569   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:50.256731   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:50.753704   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:50.753732   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:50.753742   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:50.753748   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:50.756051   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:51.253856   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:51.253881   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:51.253889   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:51.253893   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:51.256213   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:51.754024   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:51.754046   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:51.754054   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:51.754057   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:51.756688   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:52.253350   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:52.253372   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:52.253379   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:52.253382   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:52.255702   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:52.753469   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:52.753492   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:52.753500   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:52.753504   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:52.755375   35235 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0621 18:44:52.755473   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:53.254058   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:53.254079   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:53.254086   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:53.254089   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:53.257691   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0621 18:44:53.753362   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:53.753384   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:53.753392   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:53.753397   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:53.756165   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:54.253900   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:54.253924   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:54.253936   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:54.253941   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:54.258836   35235 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0621 18:44:54.753503   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:54.753531   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:54.753543   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:54.753550   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:54.756079   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:54.756230   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:55.253852   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:55.253878   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:55.253888   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:55.253893   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:55.257360   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0621 18:44:55.753655   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:55.753677   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:55.753685   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:55.753690   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:55.755813   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:56.253479   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:56.253502   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:56.253510   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:56.253514   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:56.256268   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:56.754037   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:56.754060   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:56.754067   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:56.754070   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:56.756632   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:56.756724   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:57.253331   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:57.253354   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:57.253366   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:57.253370   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:57.255914   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:57.753607   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:57.753633   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:57.753644   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:57.753652   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:57.755812   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:58.253531   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:58.253555   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:58.253566   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:58.253572   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:58.255850   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:58.753512   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:58.753538   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:58.753549   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:58.753555   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:58.755710   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:59.253408   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:59.253430   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:59.253437   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:59.253441   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:59.255930   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:44:59.256041   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:44:59.753599   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:44:59.753627   35235 round_trippers.go:469] Request Headers:
	I0621 18:44:59.753638   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:44:59.753645   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:44:59.756229   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:00.253985   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:00.254015   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:00.254025   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:00.254032   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:00.256308   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:00.753269   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:00.753302   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:00.753313   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:00.753318   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:00.756104   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:01.253837   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:01.253859   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:01.253866   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:01.253870   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:01.255961   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:01.256081   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:45:01.753756   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:01.753780   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:01.753788   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:01.753793   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:01.756409   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:02.253106   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:02.253130   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:02.253138   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:02.253142   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:02.255833   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:02.753652   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:02.753676   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:02.753684   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:02.753689   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:02.756269   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:03.254022   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:03.254046   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:03.254054   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:03.254058   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:03.256878   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:03.257002   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:45:03.753403   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:03.753427   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:03.753435   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:03.753439   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:03.756396   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:04.254152   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:04.254175   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:04.254183   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:04.254188   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:04.256522   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:04.753243   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:04.753267   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:04.753275   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:04.753279   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:04.755884   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:05.253582   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:05.253605   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:05.253613   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:05.253616   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:05.256501   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:05.753770   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:05.753809   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:05.753820   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:05.753826   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:05.756343   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:05.756444   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:45:06.254108   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:06.254134   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:06.254145   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:06.254153   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:06.256487   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:06.753139   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:06.753157   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:06.753165   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:06.753169   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:06.755898   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:07.253573   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:07.253597   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:07.253605   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:07.253609   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:07.256047   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:07.753861   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:07.753884   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:07.753891   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:07.753895   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:07.756234   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:08.254004   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:08.254028   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:08.254035   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:08.254039   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:08.256478   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:08.256592   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:45:08.753176   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:08.753198   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:08.753207   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:08.753213   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:08.755734   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:09.253450   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:09.253472   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:09.253480   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:09.253484   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:09.257716   35235 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0621 18:45:09.753430   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:09.753460   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:09.753470   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:09.753478   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:09.758419   35235 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0621 18:45:10.253123   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:10.253150   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:10.253160   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:10.253166   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:10.255214   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:10.754108   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:10.754137   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:10.754149   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:10.754154   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:10.756647   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:10.756759   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:45:11.253341   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:11.253365   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:11.253372   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:11.253375   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:11.259819   35235 round_trippers.go:574] Response Status: 404 Not Found in 6 milliseconds
	I0621 18:45:11.753498   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:11.753523   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:11.753529   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:11.753532   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:11.756024   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:12.253755   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:12.253775   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:12.253782   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:12.253785   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:12.255827   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:12.753616   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:12.753642   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:12.753653   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:12.753659   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:12.756051   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:13.253856   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:13.253880   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:13.253887   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:13.253892   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:13.256135   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:13.256236   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:45:13.753934   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:13.753958   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:13.753965   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:13.753975   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:13.756256   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:14.254028   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:14.254049   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:14.254056   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:14.254060   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:14.256641   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:14.753330   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:14.753355   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:14.753368   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:14.753375   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:14.756085   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:15.253839   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:15.253861   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:15.253869   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:15.253873   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:15.256068   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:15.753228   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:15.753256   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:15.753267   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:15.753274   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:15.755958   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:15.756073   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:45:16.253623   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:16.253648   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:16.253660   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:16.253665   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:16.255941   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:16.753611   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:16.753636   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:16.753644   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:16.753647   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:16.755948   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:17.253748   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:17.253772   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:17.253779   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:17.253782   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:17.256366   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:17.754133   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:17.754157   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:17.754164   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:17.754168   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:17.756642   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:17.756751   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:45:18.253314   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:18.253337   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:18.253345   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:18.253349   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:18.255719   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:18.753392   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:18.753415   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:18.753422   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:18.753426   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:18.755755   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:19.253431   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:19.253454   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:19.253462   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:19.253465   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:19.256052   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:19.753815   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:19.753837   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:19.753845   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:19.753848   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:19.756221   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:20.254007   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:20.254037   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:20.254050   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:20.254058   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:20.256384   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:20.256490   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:45:20.753085   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:20.753105   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:20.753113   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:20.753117   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:20.755251   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:21.254043   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:21.254069   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:21.254079   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:21.254085   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:21.255768   35235 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0621 18:45:21.753445   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:21.753468   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:21.753476   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:21.753484   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:21.759645   35235 round_trippers.go:574] Response Status: 404 Not Found in 6 milliseconds
	I0621 18:45:22.253316   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:22.253343   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:22.253352   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:22.253357   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:22.255259   35235 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0621 18:45:22.754058   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:22.754082   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:22.754090   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:22.754093   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:22.756412   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:22.756551   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:45:23.253136   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:23.253160   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:23.253168   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:23.253175   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:23.255457   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:23.753140   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:23.753161   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:23.753167   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:23.753176   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:23.755402   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:24.253097   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:24.253119   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:24.253126   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:24.253130   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:24.256175   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0621 18:45:24.753993   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:24.754017   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:24.754028   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:24.754034   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:24.756375   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:25.254140   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:25.254162   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:25.254170   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:25.254175   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:25.256565   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:25.256661   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:45:25.753651   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:25.753684   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:25.753696   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:25.753701   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:25.757005   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0621 18:45:26.253751   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:26.253776   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:26.253784   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:26.253788   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:26.256361   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:26.754109   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:26.754131   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:26.754138   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:26.754148   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:26.756397   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:27.254152   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:27.254177   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:27.254184   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:27.254188   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:27.256320   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:27.754068   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:27.754091   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:27.754097   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:27.754101   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:27.756571   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:27.756693   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:45:28.253240   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:28.253261   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:28.253270   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:28.253274   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:28.255463   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:28.753124   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:28.753146   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:28.753154   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:28.753157   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:28.755517   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:29.253209   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:29.253230   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:29.253240   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:29.253247   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:29.255668   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:29.753349   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:29.753371   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:29.753380   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:29.753385   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:29.755660   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:30.253379   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:30.253400   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:30.253409   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:30.253415   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:30.256048   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:30.256143   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:45:30.753921   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:30.753943   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:30.753965   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:30.753969   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:30.756730   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:31.253201   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:31.253226   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:31.253233   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:31.253238   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:31.256153   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:31.754019   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:31.754050   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:31.754061   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:31.754067   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:31.756429   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:32.253128   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:32.253153   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:32.253164   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:32.253169   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:32.255755   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:32.753493   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:32.753514   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:32.753521   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:32.753525   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:32.755977   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:32.756091   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:45:33.253724   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:33.253746   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:33.253756   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:33.253760   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:33.256314   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:33.754057   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:33.754082   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:33.754092   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:33.754098   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:33.756557   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:34.253231   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:34.253258   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:34.253268   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:34.253272   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:34.255728   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:34.753415   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:34.753440   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:34.753453   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:34.753461   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:34.755841   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:35.253551   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:35.253582   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:35.253593   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:35.253599   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:35.256278   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:35.256387   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:45:35.753300   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:35.753327   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:35.753337   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:35.753341   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:35.756209   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:36.253989   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:36.254015   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:36.254026   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:36.254034   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:36.256097   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:36.753872   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:36.753901   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:36.753912   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:36.753921   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:36.756059   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:37.253848   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:37.253871   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:37.253880   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:37.253884   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:37.256493   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:37.256590   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:45:37.753156   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:37.753178   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:37.753186   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:37.753192   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:37.755149   35235 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0621 18:45:38.253771   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:38.253794   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:38.253825   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:38.253830   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:38.256160   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:38.753955   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:38.753985   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:38.753992   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:38.753997   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:38.756347   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:39.254098   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:39.254122   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:39.254129   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:39.254136   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:39.256402   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:39.754126   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:39.754149   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:39.754157   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:39.754161   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:39.756436   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:39.756550   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:45:40.253130   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:40.253152   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:40.253159   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:40.253163   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:40.255680   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:40.753528   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:40.753555   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:40.753565   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:40.753570   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:40.756173   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:41.253963   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:41.253994   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:41.254005   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:41.254009   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:41.256275   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:41.754083   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:41.754106   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:41.754113   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:41.754117   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:41.756504   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:41.756596   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
	I0621 18:45:42.253204   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:42.253229   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:42.253237   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:42.253241   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:42.255314   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:42.753088   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
	I0621 18:45:42.753119   35235 round_trippers.go:469] Request Headers:
	I0621 18:45:42.753134   35235 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:45:42.753140   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:45:42.755605   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0621 18:45:42.755728   35235 node_ready.go:38] duration metric: took 4m0.002771633s for node "ha-406291-m02" to be "Ready" ...
	I0621 18:45:42.757939   35235 out.go:177] 
	W0621 18:45:42.759451   35235 out.go:239] X Exiting due to GUEST_NODE_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_NODE_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0621 18:45:42.759470   35235 out.go:239] * 
	* 
	W0621 18:45:42.761346   35235 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0621 18:45:42.762830   35235 out.go:177] 

** /stderr **
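The four minutes of 404 responses above come from minikube's node-readiness wait loop, which keeps GETting /api/v1/nodes/ha-406291-m02 until the node object exists and reports Ready; here the node never appears, so the wait hits its deadline. As a rough illustration only (not minikube's actual implementation), a client-go poll of that shape could look like the sketch below; the kubeconfig path, the 500 ms interval, and the 4-minute deadline are assumptions for the example.

// Hypothetical sketch: poll the API server until a node exists and its
// Ready condition is True, or a deadline passes. Placeholder values only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfigPath := "/path/to/kubeconfig" // placeholder, not from this run
	nodeName := "ha-406291-m02"

	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			// The node object does not exist yet: this is the stream of 404s above.
			time.Sleep(500 * time.Millisecond)
			continue
		}
		if err != nil {
			panic(err)
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}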
ha_test.go:422: I0621 18:41:25.398760   35235 out.go:291] Setting OutFile to fd 1 ...
I0621 18:41:25.399080   35235 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0621 18:41:25.399091   35235 out.go:304] Setting ErrFile to fd 2...
I0621 18:41:25.399098   35235 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0621 18:41:25.399369   35235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
I0621 18:41:25.399643   35235 mustload.go:65] Loading cluster: ha-406291
I0621 18:41:25.399990   35235 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0621 18:41:25.400377   35235 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0621 18:41:25.400435   35235 main.go:141] libmachine: Launching plugin server for driver kvm2
I0621 18:41:25.416004   35235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46545
I0621 18:41:25.416429   35235 main.go:141] libmachine: () Calling .GetVersion
I0621 18:41:25.416961   35235 main.go:141] libmachine: Using API Version  1
I0621 18:41:25.416994   35235 main.go:141] libmachine: () Calling .SetConfigRaw
I0621 18:41:25.417346   35235 main.go:141] libmachine: () Calling .GetMachineName
I0621 18:41:25.417542   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
W0621 18:41:25.419150   35235 host.go:58] "ha-406291-m02" host status: Stopped
I0621 18:41:25.421208   35235 out.go:177] * Starting "ha-406291-m02" control-plane node in "ha-406291" cluster
I0621 18:41:25.422261   35235 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
I0621 18:41:25.422293   35235 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
I0621 18:41:25.422308   35235 cache.go:56] Caching tarball of preloaded images
I0621 18:41:25.422393   35235 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I0621 18:41:25.422403   35235 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
I0621 18:41:25.422505   35235 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
I0621 18:41:25.422683   35235 start.go:360] acquireMachinesLock for ha-406291-m02: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0621 18:41:25.422738   35235 start.go:364] duration metric: took 23.154µs to acquireMachinesLock for "ha-406291-m02"
I0621 18:41:25.422752   35235 start.go:96] Skipping create...Using existing machine configuration
I0621 18:41:25.422757   35235 fix.go:54] fixHost starting: m02
I0621 18:41:25.423048   35235 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0621 18:41:25.423074   35235 main.go:141] libmachine: Launching plugin server for driver kvm2
I0621 18:41:25.437434   35235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44349
I0621 18:41:25.437837   35235 main.go:141] libmachine: () Calling .GetVersion
I0621 18:41:25.438264   35235 main.go:141] libmachine: Using API Version  1
I0621 18:41:25.438278   35235 main.go:141] libmachine: () Calling .SetConfigRaw
I0621 18:41:25.438564   35235 main.go:141] libmachine: () Calling .GetMachineName
I0621 18:41:25.438730   35235 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
I0621 18:41:25.438886   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
I0621 18:41:25.440234   35235 fix.go:112] recreateIfNeeded on ha-406291-m02: state=Stopped err=<nil>
I0621 18:41:25.440295   35235 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
W0621 18:41:25.440441   35235 fix.go:138] unexpected machine state, will restart: <nil>
I0621 18:41:25.442413   35235 out.go:177] * Restarting existing kvm2 VM for "ha-406291-m02" ...
I0621 18:41:25.443676   35235 main.go:141] libmachine: (ha-406291-m02) Calling .Start
I0621 18:41:25.443837   35235 main.go:141] libmachine: (ha-406291-m02) Ensuring networks are active...
I0621 18:41:25.444437   35235 main.go:141] libmachine: (ha-406291-m02) Ensuring network default is active
I0621 18:41:25.444763   35235 main.go:141] libmachine: (ha-406291-m02) Ensuring network mk-ha-406291 is active
I0621 18:41:25.445119   35235 main.go:141] libmachine: (ha-406291-m02) Getting domain xml...
I0621 18:41:25.445841   35235 main.go:141] libmachine: (ha-406291-m02) Creating domain...
I0621 18:41:26.648629   35235 main.go:141] libmachine: (ha-406291-m02) Waiting to get IP...
I0621 18:41:26.649534   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:26.650008   35235 main.go:141] libmachine: (ha-406291-m02) Found IP for machine: 192.168.39.89
I0621 18:41:26.650026   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has current primary IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:26.650037   35235 main.go:141] libmachine: (ha-406291-m02) Reserving static IP address...
I0621 18:41:26.650588   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "ha-406291-m02", mac: "52:54:00:a6:9a:09", ip: "192.168.39.89"} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
I0621 18:41:26.650607   35235 main.go:141] libmachine: (ha-406291-m02) DBG | skip adding static IP to network mk-ha-406291 - found existing host DHCP lease matching {name: "ha-406291-m02", mac: "52:54:00:a6:9a:09", ip: "192.168.39.89"}
I0621 18:41:26.650621   35235 main.go:141] libmachine: (ha-406291-m02) Reserved static IP address: 192.168.39.89
I0621 18:41:26.650637   35235 main.go:141] libmachine: (ha-406291-m02) Waiting for SSH to be available...
I0621 18:41:26.650654   35235 main.go:141] libmachine: (ha-406291-m02) DBG | Getting to WaitForSSH function...
I0621 18:41:26.653892   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:26.654349   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
I0621 18:41:26.654382   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:26.654602   35235 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH client type: external
I0621 18:41:26.654628   35235 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa (-rw-------)
I0621 18:41:26.654681   35235 main.go:141] libmachine: (ha-406291-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I0621 18:41:26.654709   35235 main.go:141] libmachine: (ha-406291-m02) DBG | About to run SSH command:
I0621 18:41:26.654721   35235 main.go:141] libmachine: (ha-406291-m02) DBG | exit 0
I0621 18:41:37.793973   35235 main.go:141] libmachine: (ha-406291-m02) DBG | SSH cmd err, output: <nil>: 
I0621 18:41:37.794425   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
I0621 18:41:37.795106   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
I0621 18:41:37.798128   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:37.798622   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
I0621 18:41:37.798650   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:37.798904   35235 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
I0621 18:41:37.799103   35235 machine.go:94] provisionDockerMachine start ...
I0621 18:41:37.799122   35235 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
I0621 18:41:37.799339   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
I0621 18:41:37.801643   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:37.802142   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
I0621 18:41:37.802200   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:37.802300   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
I0621 18:41:37.802497   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
I0621 18:41:37.802687   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
I0621 18:41:37.802845   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
I0621 18:41:37.803037   35235 main.go:141] libmachine: Using SSH client type: native
I0621 18:41:37.803303   35235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
I0621 18:41:37.803374   35235 main.go:141] libmachine: About to run SSH command:
hostname
I0621 18:41:37.906057   35235 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I0621 18:41:37.906090   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
I0621 18:41:37.906330   35235 buildroot.go:166] provisioning hostname "ha-406291-m02"
I0621 18:41:37.906355   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
I0621 18:41:37.906532   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
I0621 18:41:37.909848   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:37.910259   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
I0621 18:41:37.910277   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:37.910445   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
I0621 18:41:37.910640   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
I0621 18:41:37.910792   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
I0621 18:41:37.910957   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
I0621 18:41:37.911135   35235 main.go:141] libmachine: Using SSH client type: native
I0621 18:41:37.911337   35235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
I0621 18:41:37.911350   35235 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-406291-m02 && echo "ha-406291-m02" | sudo tee /etc/hostname
I0621 18:41:38.045519   35235 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291-m02

I0621 18:41:38.045548   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
I0621 18:41:38.048010   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:38.048407   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
I0621 18:41:38.048430   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:38.048622   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
I0621 18:41:38.048815   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
I0621 18:41:38.048976   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
I0621 18:41:38.049122   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
I0621 18:41:38.049278   35235 main.go:141] libmachine: Using SSH client type: native
I0621 18:41:38.049470   35235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
I0621 18:41:38.049495   35235 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sha-406291-m02' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291-m02/g' /etc/hosts;
			else 
				echo '127.0.1.1 ha-406291-m02' | sudo tee -a /etc/hosts; 
			fi
		fi
I0621 18:41:38.162214   35235 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0621 18:41:38.162247   35235 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
I0621 18:41:38.162272   35235 buildroot.go:174] setting up certificates
I0621 18:41:38.162295   35235 provision.go:84] configureAuth start
I0621 18:41:38.162307   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
I0621 18:41:38.162563   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
I0621 18:41:38.165407   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:38.165831   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
I0621 18:41:38.165862   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:38.166002   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
I0621 18:41:38.168297   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:38.168630   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
I0621 18:41:38.168656   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:38.168861   35235 provision.go:143] copyHostCerts
I0621 18:41:38.168886   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
I0621 18:41:38.168929   35235 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
I0621 18:41:38.168941   35235 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
I0621 18:41:38.169002   35235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
I0621 18:41:38.169072   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
I0621 18:41:38.169093   35235 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
I0621 18:41:38.169100   35235 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
I0621 18:41:38.169128   35235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
I0621 18:41:38.169168   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
I0621 18:41:38.169185   35235 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
I0621 18:41:38.169191   35235 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
I0621 18:41:38.169212   35235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
I0621 18:41:38.169255   35235 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291-m02 san=[127.0.0.1 192.168.39.89 ha-406291-m02 localhost minikube]
I0621 18:41:38.339099   35235 provision.go:177] copyRemoteCerts
I0621 18:41:38.339154   35235 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0621 18:41:38.339175   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
I0621 18:41:38.342201   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:38.342572   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
I0621 18:41:38.342600   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:38.342797   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
I0621 18:41:38.342986   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
I0621 18:41:38.343175   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
I0621 18:41:38.343285   35235 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
I0621 18:41:38.423299   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0621 18:41:38.423361   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0621 18:41:38.446136   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
I0621 18:41:38.446198   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0621 18:41:38.468132   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0621 18:41:38.468213   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0621 18:41:38.489937   35235 provision.go:87] duration metric: took 327.620634ms to configureAuth
I0621 18:41:38.489968   35235 buildroot.go:189] setting minikube options for container-runtime
I0621 18:41:38.490253   35235 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0621 18:41:38.490353   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
I0621 18:41:38.492907   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:38.493300   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
I0621 18:41:38.493332   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:38.493490   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
I0621 18:41:38.493676   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
I0621 18:41:38.493857   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
I0621 18:41:38.493965   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
I0621 18:41:38.494129   35235 main.go:141] libmachine: Using SSH client type: native
I0621 18:41:38.494335   35235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
I0621 18:41:38.494363   35235 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I0621 18:41:38.745084   35235 main.go:141] libmachine: SSH cmd err, output: <nil>: 
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

I0621 18:41:38.745110   35235 machine.go:97] duration metric: took 945.994237ms to provisionDockerMachine
I0621 18:41:38.745120   35235 start.go:293] postStartSetup for "ha-406291-m02" (driver="kvm2")
I0621 18:41:38.745129   35235 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0621 18:41:38.745144   35235 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
I0621 18:41:38.745474   35235 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0621 18:41:38.745498   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
I0621 18:41:38.748247   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:38.748661   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
I0621 18:41:38.748683   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:38.748882   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
I0621 18:41:38.749061   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
I0621 18:41:38.749235   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
I0621 18:41:38.749372   35235 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
I0621 18:41:38.831959   35235 ssh_runner.go:195] Run: cat /etc/os-release
I0621 18:41:38.835928   35235 info.go:137] Remote host: Buildroot 2023.02.9
I0621 18:41:38.835957   35235 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
I0621 18:41:38.836032   35235 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
I0621 18:41:38.836116   35235 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
I0621 18:41:38.836126   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
I0621 18:41:38.836212   35235 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0621 18:41:38.844773   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
I0621 18:41:38.867692   35235 start.go:296] duration metric: took 122.557034ms for postStartSetup
I0621 18:41:38.867740   35235 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
I0621 18:41:38.868006   35235 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0621 18:41:38.868031   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
I0621 18:41:38.870401   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:38.870690   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
I0621 18:41:38.870735   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:38.870878   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
I0621 18:41:38.871049   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
I0621 18:41:38.871179   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
I0621 18:41:38.871317   35235 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
I0621 18:41:38.951453   35235 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0621 18:41:38.951568   35235 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0621 18:41:39.007442   35235 fix.go:56] duration metric: took 13.584678445s for fixHost
I0621 18:41:39.007487   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
I0621 18:41:39.010155   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:39.010551   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
I0621 18:41:39.010579   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:39.010735   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
I0621 18:41:39.010956   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
I0621 18:41:39.011091   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
I0621 18:41:39.011224   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
I0621 18:41:39.011345   35235 main.go:141] libmachine: Using SSH client type: native
I0621 18:41:39.011500   35235 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
I0621 18:41:39.011507   35235 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0621 18:41:39.114136   35235 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718995299.079723478

I0621 18:41:39.114155   35235 fix.go:216] guest clock: 1718995299.079723478
I0621 18:41:39.114162   35235 fix.go:229] Guest: 2024-06-21 18:41:39.079723478 +0000 UTC Remote: 2024-06-21 18:41:39.007467135 +0000 UTC m=+13.642581494 (delta=72.256343ms)
I0621 18:41:39.114178   35235 fix.go:200] guest clock delta is within tolerance: 72.256343ms
I0621 18:41:39.114183   35235 start.go:83] releasing machines lock for "ha-406291-m02", held for 13.691435613s
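The fixHost step above reads the guest clock over SSH and accepts it when the offset from the local clock is small (delta=72.256343ms in this run). Below is a self-contained sketch of that delta computation using the two timestamps reported in the log; the 2-second tolerance is an assumed value for illustration, not necessarily minikube's threshold.

// Hypothetical sketch of a guest-vs-host clock skew check like the one the
// log reports. The tolerance value here is an assumption for the example.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the log lines above (guest clock vs. remote/local clock).
	guest := time.Unix(1718995299, 79723478)
	remote := time.Date(2024, 6, 21, 18, 41, 39, 7467135, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}

	const tolerance = 2 * time.Second // assumed threshold
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}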
I0621 18:41:39.114199   35235 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
I0621 18:41:39.114511   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
I0621 18:41:39.117074   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:39.117429   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
I0621 18:41:39.117452   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:39.117607   35235 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
I0621 18:41:39.118097   35235 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
I0621 18:41:39.118279   35235 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
I0621 18:41:39.118373   35235 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0621 18:41:39.118417   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
I0621 18:41:39.118456   35235 ssh_runner.go:195] Run: systemctl --version
I0621 18:41:39.118475   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
I0621 18:41:39.121123   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:39.121389   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:39.121534   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
I0621 18:41:39.121575   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:39.121700   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
I0621 18:41:39.121810   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
I0621 18:41:39.121835   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:39.121877   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
I0621 18:41:39.121972   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
I0621 18:41:39.122036   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
I0621 18:41:39.122113   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
I0621 18:41:39.122179   35235 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
I0621 18:41:39.122232   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
I0621 18:41:39.122333   35235 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
I0621 18:41:39.234301   35235 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I0621 18:41:39.382103   35235 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0621 18:41:39.387475   35235 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0621 18:41:39.387529   35235 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0621 18:41:39.403693   35235 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0621 18:41:39.403716   35235 start.go:494] detecting cgroup driver to use...
I0621 18:41:39.403769   35235 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0621 18:41:39.418555   35235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0621 18:41:39.431995   35235 docker.go:217] disabling cri-docker service (if available) ...
I0621 18:41:39.432045   35235 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0621 18:41:39.446478   35235 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0621 18:41:39.459421   35235 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0621 18:41:39.576198   35235 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0621 18:41:39.736545   35235 docker.go:233] disabling docker service ...
I0621 18:41:39.736599   35235 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0621 18:41:39.753584   35235 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0621 18:41:39.766092   35235 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0621 18:41:39.893661   35235 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0621 18:41:40.007219   35235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
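For readers following the runtime detection above: the `systemctl is-active --quiet` probes decide purely by exit code, so a minimal sketch of that check in Go (illustrative only, not minikube's actual helper; the log's literal "service <name>" argument is simplified away here):

// Hypothetical helper sketching the `systemctl is-active --quiet <unit>` probe above.
package main

import (
	"fmt"
	"os/exec"
)

// unitActive returns true when systemd reports the unit as active.
// `systemctl is-active --quiet` exits 0 only for active units, so a nil
// error from Run() is enough to decide.
func unitActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	for _, u := range []string{"docker", "containerd", "kubelet"} {
		fmt.Printf("%s active: %v\n", u, unitActive(u))
	}
}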
I0621 18:41:40.020987   35235 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I0621 18:41:40.038350   35235 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
I0621 18:41:40.038424   35235 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
I0621 18:41:40.048698   35235 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I0621 18:41:40.048765   35235 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I0621 18:41:40.058773   35235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I0621 18:41:40.069126   35235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I0621 18:41:40.079117   35235 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0621 18:41:40.089592   35235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I0621 18:41:40.099897   35235 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I0621 18:41:40.116226   35235 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I0621 18:41:40.126293   35235 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0621 18:41:40.135067   35235 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:

stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0621 18:41:40.135110   35235 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0621 18:41:40.146762   35235 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0621 18:41:40.155796   35235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0621 18:41:40.273955   35235 ssh_runner.go:195] Run: sudo systemctl restart crio
I0621 18:41:40.400262   35235 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
I0621 18:41:40.400366   35235 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I0621 18:41:40.404705   35235 start.go:562] Will wait 60s for crictl version
I0621 18:41:40.404761   35235 ssh_runner.go:195] Run: which crictl
I0621 18:41:40.408123   35235 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0621 18:41:40.446718   35235 start.go:578] Version:  0.1.0
RuntimeName:  cri-o
RuntimeVersion:  1.29.1
RuntimeApiVersion:  v1
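The two 60-second waits above (for /var/run/crio/crio.sock and for a working crictl) amount to polling until a path or command becomes available. A minimal sketch of that pattern, assuming a plain stat-based poll rather than minikube's real retry helper:

// Illustrative poll-for-path loop; timings match the log, code does not claim to be minikube's.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket (or file) exists, stop waiting
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}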
I0621 18:41:40.446815   35235 ssh_runner.go:195] Run: crio --version
I0621 18:41:40.474349   35235 ssh_runner.go:195] Run: crio --version
I0621 18:41:40.503870   35235 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
I0621 18:41:40.505110   35235 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
I0621 18:41:40.507792   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:40.508197   35235 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
I0621 18:41:40.508224   35235 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
I0621 18:41:40.508442   35235 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
I0621 18:41:40.512330   35235 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0621 18:41:40.525820   35235 mustload.go:65] Loading cluster: ha-406291
I0621 18:41:40.526028   35235 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0621 18:41:40.526283   35235 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0621 18:41:40.526322   35235 main.go:141] libmachine: Launching plugin server for driver kvm2
I0621 18:41:40.541570   35235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
I0621 18:41:40.541991   35235 main.go:141] libmachine: () Calling .GetVersion
I0621 18:41:40.542495   35235 main.go:141] libmachine: Using API Version  1
I0621 18:41:40.542516   35235 main.go:141] libmachine: () Calling .SetConfigRaw
I0621 18:41:40.542896   35235 main.go:141] libmachine: () Calling .GetMachineName
I0621 18:41:40.543056   35235 main.go:141] libmachine: (ha-406291) Calling .GetState
I0621 18:41:40.544520   35235 host.go:66] Checking if "ha-406291" exists ...
I0621 18:41:40.544793   35235 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0621 18:41:40.544828   35235 main.go:141] libmachine: Launching plugin server for driver kvm2
I0621 18:41:40.560583   35235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40599
I0621 18:41:40.561329   35235 main.go:141] libmachine: () Calling .GetVersion
I0621 18:41:40.561819   35235 main.go:141] libmachine: Using API Version  1
I0621 18:41:40.561836   35235 main.go:141] libmachine: () Calling .SetConfigRaw
I0621 18:41:40.562384   35235 main.go:141] libmachine: () Calling .GetMachineName
I0621 18:41:40.562527   35235 main.go:141] libmachine: (ha-406291) Calling .DriverName
I0621 18:41:40.562657   35235 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.89
I0621 18:41:40.562669   35235 certs.go:194] generating shared ca certs ...
I0621 18:41:40.562685   35235 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0621 18:41:40.562843   35235 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
I0621 18:41:40.562890   35235 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
I0621 18:41:40.562900   35235 certs.go:256] generating profile certs ...
I0621 18:41:40.562983   35235 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
I0621 18:41:40.563075   35235 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63
I0621 18:41:40.563124   35235 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
I0621 18:41:40.563136   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0621 18:41:40.563154   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0621 18:41:40.563173   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0621 18:41:40.563187   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0621 18:41:40.563202   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0621 18:41:40.563216   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0621 18:41:40.563229   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0621 18:41:40.563255   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0621 18:41:40.563312   35235 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
W0621 18:41:40.563349   35235 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
I0621 18:41:40.563363   35235 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
I0621 18:41:40.563391   35235 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
I0621 18:41:40.563417   35235 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
I0621 18:41:40.563444   35235 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
I0621 18:41:40.563483   35235 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
I0621 18:41:40.563515   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
I0621 18:41:40.563530   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
I0621 18:41:40.563544   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0621 18:41:40.563570   35235 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
I0621 18:41:40.566480   35235 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
I0621 18:41:40.566890   35235 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
I0621 18:41:40.566916   35235 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
I0621 18:41:40.567069   35235 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
I0621 18:41:40.567250   35235 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
I0621 18:41:40.567419   35235 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
I0621 18:41:40.567600   35235 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
I0621 18:41:40.634158   35235 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
I0621 18:41:40.639816   35235 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
I0621 18:41:40.650182   35235 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
I0621 18:41:40.653818   35235 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
I0621 18:41:40.663383   35235 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
I0621 18:41:40.667143   35235 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
I0621 18:41:40.676946   35235 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
I0621 18:41:40.681196   35235 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
I0621 18:41:40.692687   35235 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
I0621 18:41:40.696635   35235 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
I0621 18:41:40.706519   35235 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
I0621 18:41:40.710548   35235 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
I0621 18:41:40.721073   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0621 18:41:40.744850   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0621 18:41:40.770255   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0621 18:41:40.793885   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0621 18:41:40.818855   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I0621 18:41:40.842932   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0621 18:41:40.864560   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0621 18:41:40.887186   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0621 18:41:40.908943   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
I0621 18:41:40.930236   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
I0621 18:41:40.952389   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0621 18:41:40.973993   35235 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
I0621 18:41:40.989089   35235 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
I0621 18:41:41.004282   35235 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
I0621 18:41:41.019635   35235 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
I0621 18:41:41.040987   35235 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
I0621 18:41:41.058089   35235 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
I0621 18:41:41.073644   35235 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
I0621 18:41:41.090884   35235 ssh_runner.go:195] Run: openssl version
I0621 18:41:41.096367   35235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
I0621 18:41:41.107820   35235 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
I0621 18:41:41.111708   35235 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
I0621 18:41:41.111759   35235 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
I0621 18:41:41.116944   35235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
I0621 18:41:41.126635   35235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0621 18:41:41.136550   35235 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0621 18:41:41.140357   35235 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
I0621 18:41:41.140410   35235 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0621 18:41:41.145589   35235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0621 18:41:41.155902   35235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
I0621 18:41:41.166054   35235 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
I0621 18:41:41.170212   35235 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
I0621 18:41:41.170271   35235 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
I0621 18:41:41.175431   35235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
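The openssl/ln pairs above create the standard OpenSSL subject-hash symlinks in /etc/ssl/certs, which is what makes the copied CA files trusted via the default lookup path. A rough sketch of the same idea (paths and function names are illustrative, not minikube's implementation):

// Sketch: compute the subject hash of a PEM CA and link it as /etc/ssl/certs/<hash>.0.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCA(pem, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mimic `ln -fs` (force replace)
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}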
I0621 18:41:41.186311   35235 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0621 18:41:41.190080   35235 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0621 18:41:41.190134   35235 kubeadm.go:928] updating node {m02 192.168.39.89 8443 v1.30.2 crio true true} ...
I0621 18:41:41.190237   35235 kubeadm.go:940] kubelet [Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89

[Install]
config:
{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0621 18:41:41.190265   35235 kube-vip.go:115] generating kube-vip config ...
I0621 18:41:41.190293   35235 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I0621 18:41:41.205204   35235 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
I0621 18:41:41.205325   35235 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args:
    - manager
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "8443"
    - name: vip_nodename
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: vip_interface
      value: eth0
    - name: vip_cidr
      value: "32"
    - name: dns_mode
      value: first
    - name: cp_enable
      value: "true"
    - name: cp_namespace
      value: kube-system
    - name: vip_leaderelection
      value: "true"
    - name: vip_leasename
      value: plndr-cp-lock
    - name: vip_leaseduration
      value: "5"
    - name: vip_renewdeadline
      value: "3"
    - name: vip_retryperiod
      value: "1"
    - name: address
      value: 192.168.39.254
    - name: prometheus_server
      value: :2112
    - name: lb_enable
      value: "true"
    - name: lb_port
      value: "8443"
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    imagePullPolicy: IfNotPresent
    name: kube-vip
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
    volumeMounts:
    - mountPath: /etc/kubernetes/admin.conf
      name: kubeconfig
  hostAliases:
  - hostnames:
    - kubernetes
    ip: 127.0.0.1
  hostNetwork: true
  volumes:
  - hostPath:
      path: "/etc/kubernetes/admin.conf"
    name: kubeconfig
status: {}
I0621 18:41:41.205385   35235 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
I0621 18:41:41.216597   35235 binaries.go:47] Didn't find k8s binaries: didn't find preexisting kubelet
Initiating transfer...
I0621 18:41:41.216648   35235 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
I0621 18:41:41.225943   35235 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
I0621 18:41:41.225951   35235 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256
I0621 18:41:41.225944   35235 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
I0621 18:41:41.225984   35235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0621 18:41:41.225996   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
I0621 18:41:41.225972   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
I0621 18:41:41.226088   35235 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
I0621 18:41:41.226158   35235 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
I0621 18:41:41.239628   35235 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
I0621 18:41:41.239672   35235 ssh_runner.go:356] copy: skipping /var/lib/minikube/binaries/v1.30.2/kubectl (exists)
I0621 18:41:41.239707   35235 ssh_runner.go:356] copy: skipping /var/lib/minikube/binaries/v1.30.2/kubeadm (exists)
I0621 18:41:41.239727   35235 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
I0621 18:41:41.243410   35235 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
I0621 18:41:41.243438   35235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
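The binary.go lines earlier point at dl.k8s.io URLs with a `checksum=file:...kubelet.sha256` query, meaning the release binary is verified against the published SHA-256 file. A hedged sketch of that download-and-verify step (assumed behaviour, not minikube's downloader; error handling trimmed):

// Sketch: fetch a Kubernetes release binary and compare its SHA-256
// with the .sha256 file published next to it.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url to dest and returns the hex SHA-256 of the bytes written.
func fetch(url, dest string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	const base = "https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet"

	got, err := fetch(base, "/tmp/kubelet")
	if err != nil {
		panic(err)
	}

	resp, err := http.Get(base + ".sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	want, _ := io.ReadAll(resp.Body)

	if got != strings.TrimSpace(string(want)) {
		fmt.Println("checksum mismatch")
		return
	}
	fmt.Println("kubelet verified:", got)
}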
I0621 18:41:41.698257   35235 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
I0621 18:41:41.707723   35235 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I0621 18:41:41.724639   35235 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0621 18:41:41.743101   35235 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
I0621 18:41:41.760944   35235 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
I0621 18:41:41.764890   35235 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0621 18:41:41.776520   35235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0621 18:41:41.886856   35235 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0621 18:41:41.903133   35235 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I0621 18:41:41.903253   35235 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0621 18:41:41.903431   35235 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0621 18:41:41.905550   35235 out.go:177] * Enabled addons: 
I0621 18:41:41.905576   35235 out.go:177] * Verifying Kubernetes components...
I0621 18:41:41.906878   35235 addons.go:510] duration metric: took 3.645796ms for enable addons: enabled=[]
I0621 18:41:41.907017   35235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0621 18:41:42.037375   35235 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0621 18:41:42.751725   35235 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
I0621 18:41:42.752154   35235 kapi.go:59] client config for ha-406291: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt", KeyFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key", CAFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf98a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
W0621 18:41:42.752273   35235 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.198:8443
I0621 18:41:42.752738   35235 cert_rotation.go:137] Starting client certificate rotation controller
I0621 18:41:42.752934   35235 node_ready.go:35] waiting up to 6m0s for node "ha-406291-m02" to be "Ready" ...
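The polling that follows is node_ready.go waiting for the new control-plane node to register and report Ready; the 404 responses below simply mean the Node object does not exist yet. A sketch of an equivalent wait using client-go (illustrative; the kubeconfig path is taken from the log above, everything else here is assumed, not minikube's code):

// Sketch: poll the API server until the node's Ready condition is True,
// treating NotFound as "keep waiting", with a 6-minute timeout as in the log.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true, func(ctx context.Context) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return false, nil // node not registered yet, keep polling
		}
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19112-8111/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "ha-406291-m02", 6*time.Minute); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}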
I0621 18:41:42.753014   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:42.753026   35235 round_trippers.go:469] Request Headers:
I0621 18:41:42.753035   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:42.753044   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:42.761710   35235 round_trippers.go:574] Response Status: 404 Not Found in 8 milliseconds
I0621 18:41:43.253361   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:43.253384   35235 round_trippers.go:469] Request Headers:
I0621 18:41:43.253392   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:43.253397   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:43.255457   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:43.753171   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:43.753205   35235 round_trippers.go:469] Request Headers:
I0621 18:41:43.753214   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:43.753218   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:43.755464   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:44.253985   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:44.254017   35235 round_trippers.go:469] Request Headers:
I0621 18:41:44.254028   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:44.254033   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:44.256556   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:44.753160   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:44.753190   35235 round_trippers.go:469] Request Headers:
I0621 18:41:44.753199   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:44.753207   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:44.755509   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:44.755615   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:41:45.253212   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:45.253234   35235 round_trippers.go:469] Request Headers:
I0621 18:41:45.253242   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:45.253245   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:45.255313   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:45.753311   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:45.753333   35235 round_trippers.go:469] Request Headers:
I0621 18:41:45.753340   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:45.753344   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:45.756039   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:46.253908   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:46.253976   35235 round_trippers.go:469] Request Headers:
I0621 18:41:46.253991   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:46.253997   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:46.259190   35235 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
I0621 18:41:46.753761   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:46.753783   35235 round_trippers.go:469] Request Headers:
I0621 18:41:46.753791   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:46.753808   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:46.756233   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:46.756359   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:41:47.254003   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:47.254033   35235 round_trippers.go:469] Request Headers:
I0621 18:41:47.254044   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:47.254050   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:47.256388   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:47.754169   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:47.754190   35235 round_trippers.go:469] Request Headers:
I0621 18:41:47.754198   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:47.754203   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:47.756582   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:48.253249   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:48.253269   35235 round_trippers.go:469] Request Headers:
I0621 18:41:48.253276   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:48.253282   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:48.255342   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:48.754157   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:48.754184   35235 round_trippers.go:469] Request Headers:
I0621 18:41:48.754195   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:48.754201   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:48.757057   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:48.757168   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:41:49.253926   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:49.253948   35235 round_trippers.go:469] Request Headers:
I0621 18:41:49.253955   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:49.253959   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:49.256216   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:49.753944   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:49.753976   35235 round_trippers.go:469] Request Headers:
I0621 18:41:49.753985   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:49.753989   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:49.756138   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:50.253937   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:50.253959   35235 round_trippers.go:469] Request Headers:
I0621 18:41:50.253967   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:50.253973   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:50.256423   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:50.753631   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:50.753672   35235 round_trippers.go:469] Request Headers:
I0621 18:41:50.753680   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:50.753684   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:50.755842   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:51.253486   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:51.253508   35235 round_trippers.go:469] Request Headers:
I0621 18:41:51.253516   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:51.253520   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:51.255914   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:51.256023   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:41:51.753640   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:51.753668   35235 round_trippers.go:469] Request Headers:
I0621 18:41:51.753679   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:51.753687   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:51.756525   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:52.253151   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:52.253175   35235 round_trippers.go:469] Request Headers:
I0621 18:41:52.253185   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:52.253191   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:52.255417   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:52.753719   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:52.753744   35235 round_trippers.go:469] Request Headers:
I0621 18:41:52.753752   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:52.753756   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:52.756141   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:53.253574   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:53.253594   35235 round_trippers.go:469] Request Headers:
I0621 18:41:53.253601   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:53.253606   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:53.255928   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:53.256044   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:41:53.753598   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:53.753618   35235 round_trippers.go:469] Request Headers:
I0621 18:41:53.753626   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:53.753630   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:53.756090   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:54.253878   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:54.253900   35235 round_trippers.go:469] Request Headers:
I0621 18:41:54.253908   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:54.253911   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:54.256262   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:54.753183   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:54.753215   35235 round_trippers.go:469] Request Headers:
I0621 18:41:54.753225   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:54.753229   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:54.757306   35235 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
I0621 18:41:55.254087   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:55.254109   35235 round_trippers.go:469] Request Headers:
I0621 18:41:55.254116   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:55.254120   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:55.256290   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:55.256391   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:41:55.753269   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:55.753293   35235 round_trippers.go:469] Request Headers:
I0621 18:41:55.753300   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:55.753304   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:55.755737   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:56.253447   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:56.253496   35235 round_trippers.go:469] Request Headers:
I0621 18:41:56.253507   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:56.253513   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:56.255797   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:56.753462   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:56.753489   35235 round_trippers.go:469] Request Headers:
I0621 18:41:56.753498   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:56.753509   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:56.755610   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:57.253266   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:57.253286   35235 round_trippers.go:469] Request Headers:
I0621 18:41:57.253293   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:57.253302   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:57.255333   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:57.754092   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:57.754113   35235 round_trippers.go:469] Request Headers:
I0621 18:41:57.754121   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:57.754125   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:57.756587   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:57.756713   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:41:58.253252   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:58.253277   35235 round_trippers.go:469] Request Headers:
I0621 18:41:58.253293   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:58.253299   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:58.255468   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:58.753160   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:58.753184   35235 round_trippers.go:469] Request Headers:
I0621 18:41:58.753192   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:58.753195   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:58.755547   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:59.253241   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:59.253276   35235 round_trippers.go:469] Request Headers:
I0621 18:41:59.253287   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:59.253291   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:59.255669   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:41:59.753367   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:41:59.753392   35235 round_trippers.go:469] Request Headers:
I0621 18:41:59.753401   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:41:59.753407   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:41:59.755615   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:00.253267   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:00.253399   35235 round_trippers.go:469] Request Headers:
I0621 18:42:00.253557   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:00.253571   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:00.256856   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I0621 18:42:00.256949   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:00.753594   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:00.753633   35235 round_trippers.go:469] Request Headers:
I0621 18:42:00.753643   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:00.753647   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:00.756443   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:01.253121   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:01.253143   35235 round_trippers.go:469] Request Headers:
I0621 18:42:01.253150   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:01.253156   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:01.255464   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:01.753187   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:01.753225   35235 round_trippers.go:469] Request Headers:
I0621 18:42:01.753238   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:01.753244   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:01.755643   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:02.253356   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:02.253378   35235 round_trippers.go:469] Request Headers:
I0621 18:42:02.253387   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:02.253391   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:02.256121   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:02.753904   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:02.753934   35235 round_trippers.go:469] Request Headers:
I0621 18:42:02.753942   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:02.753947   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:02.756015   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:02.756101   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:03.253925   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:03.253959   35235 round_trippers.go:469] Request Headers:
I0621 18:42:03.253970   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:03.253974   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:03.256199   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:03.753971   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:03.753997   35235 round_trippers.go:469] Request Headers:
I0621 18:42:03.754007   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:03.754012   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:03.756158   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:04.253963   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:04.253985   35235 round_trippers.go:469] Request Headers:
I0621 18:42:04.253993   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:04.253997   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:04.256107   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:04.753868   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:04.753891   35235 round_trippers.go:469] Request Headers:
I0621 18:42:04.753899   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:04.753902   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:04.758115   35235 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
I0621 18:42:04.758305   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:05.253485   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:05.253509   35235 round_trippers.go:469] Request Headers:
I0621 18:42:05.253516   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:05.253521   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:05.255980   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:05.754131   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:05.754153   35235 round_trippers.go:469] Request Headers:
I0621 18:42:05.754161   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:05.754166   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:05.756385   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:06.253134   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:06.253163   35235 round_trippers.go:469] Request Headers:
I0621 18:42:06.253171   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:06.253176   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:06.255582   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:06.753260   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:06.753301   35235 round_trippers.go:469] Request Headers:
I0621 18:42:06.753310   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:06.753316   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:06.755505   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:07.253240   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:07.253276   35235 round_trippers.go:469] Request Headers:
I0621 18:42:07.253288   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:07.253293   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:07.255461   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:07.255567   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:07.753165   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:07.753186   35235 round_trippers.go:469] Request Headers:
I0621 18:42:07.753193   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:07.753197   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:07.755449   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:08.253180   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:08.253203   35235 round_trippers.go:469] Request Headers:
I0621 18:42:08.253210   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:08.253214   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:08.255478   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:08.753122   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:08.753144   35235 round_trippers.go:469] Request Headers:
I0621 18:42:08.753150   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:08.753154   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:08.755775   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:09.253414   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:09.253446   35235 round_trippers.go:469] Request Headers:
I0621 18:42:09.253454   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:09.253458   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:09.255954   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:09.256045   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:09.753642   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:09.753670   35235 round_trippers.go:469] Request Headers:
I0621 18:42:09.753681   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:09.753686   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:09.756626   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:10.253354   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:10.253383   35235 round_trippers.go:469] Request Headers:
I0621 18:42:10.253392   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:10.253398   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:10.255677   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:10.753063   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:10.753086   35235 round_trippers.go:469] Request Headers:
I0621 18:42:10.753093   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:10.753097   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:10.755029   35235 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I0621 18:42:11.253774   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:11.253825   35235 round_trippers.go:469] Request Headers:
I0621 18:42:11.253838   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:11.253843   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:11.256408   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:11.256528   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:11.754151   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:11.754171   35235 round_trippers.go:469] Request Headers:
I0621 18:42:11.754179   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:11.754182   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:11.756541   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:12.253205   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:12.253229   35235 round_trippers.go:469] Request Headers:
I0621 18:42:12.253237   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:12.253244   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:12.257722   35235 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
I0621 18:42:12.753388   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:12.753417   35235 round_trippers.go:469] Request Headers:
I0621 18:42:12.753429   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:12.753436   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:12.755570   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:13.253250   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:13.253273   35235 round_trippers.go:469] Request Headers:
I0621 18:42:13.253281   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:13.253285   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:13.255704   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:13.753395   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:13.753423   35235 round_trippers.go:469] Request Headers:
I0621 18:42:13.753431   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:13.753436   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:13.756058   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:13.756196   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:14.253863   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:14.253887   35235 round_trippers.go:469] Request Headers:
I0621 18:42:14.253894   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:14.253899   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:14.256504   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:14.753198   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:14.753219   35235 round_trippers.go:469] Request Headers:
I0621 18:42:14.753227   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:14.753231   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:14.756110   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:15.253908   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:15.253953   35235 round_trippers.go:469] Request Headers:
I0621 18:42:15.253961   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:15.253966   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:15.256153   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:15.753330   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:15.753361   35235 round_trippers.go:469] Request Headers:
I0621 18:42:15.753373   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:15.753379   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:15.756028   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:16.253789   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:16.253837   35235 round_trippers.go:469] Request Headers:
I0621 18:42:16.253848   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:16.253854   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:16.256302   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:16.256407   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:16.754028   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:16.754060   35235 round_trippers.go:469] Request Headers:
I0621 18:42:16.754068   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:16.754074   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:16.756338   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:17.254142   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:17.254167   35235 round_trippers.go:469] Request Headers:
I0621 18:42:17.254179   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:17.254186   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:17.257058   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:17.753820   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:17.753845   35235 round_trippers.go:469] Request Headers:
I0621 18:42:17.753854   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:17.753859   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:17.756211   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:18.253941   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:18.253967   35235 round_trippers.go:469] Request Headers:
I0621 18:42:18.253979   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:18.253984   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:18.256278   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:18.754069   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:18.754094   35235 round_trippers.go:469] Request Headers:
I0621 18:42:18.754104   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:18.754111   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:18.757002   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:18.757131   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:19.253739   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:19.253762   35235 round_trippers.go:469] Request Headers:
I0621 18:42:19.253769   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:19.253778   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:19.256223   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:19.754025   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:19.754049   35235 round_trippers.go:469] Request Headers:
I0621 18:42:19.754058   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:19.754063   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:19.756690   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:20.253368   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:20.253390   35235 round_trippers.go:469] Request Headers:
I0621 18:42:20.253403   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:20.253407   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:20.256257   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:20.754183   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:20.754206   35235 round_trippers.go:469] Request Headers:
I0621 18:42:20.754216   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:20.754224   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:20.756539   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:21.253199   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:21.253220   35235 round_trippers.go:469] Request Headers:
I0621 18:42:21.253228   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:21.253233   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:21.255840   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:21.255936   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:21.753575   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:21.753603   35235 round_trippers.go:469] Request Headers:
I0621 18:42:21.753613   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:21.753619   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:21.755746   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:22.253402   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:22.253424   35235 round_trippers.go:469] Request Headers:
I0621 18:42:22.253431   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:22.253436   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:22.256162   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:22.753987   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:22.754007   35235 round_trippers.go:469] Request Headers:
I0621 18:42:22.754014   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:22.754021   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:22.756609   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:23.253300   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:23.253325   35235 round_trippers.go:469] Request Headers:
I0621 18:42:23.253333   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:23.253338   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:23.256293   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:23.256396   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:23.754045   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:23.754067   35235 round_trippers.go:469] Request Headers:
I0621 18:42:23.754075   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:23.754078   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:23.756374   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:24.254184   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:24.254207   35235 round_trippers.go:469] Request Headers:
I0621 18:42:24.254216   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:24.254220   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:24.256646   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:24.753347   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:24.753373   35235 round_trippers.go:469] Request Headers:
I0621 18:42:24.753385   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:24.753392   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:24.757869   35235 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
I0621 18:42:25.253523   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:25.253546   35235 round_trippers.go:469] Request Headers:
I0621 18:42:25.253553   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:25.253557   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:25.255919   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:25.754162   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:25.754188   35235 round_trippers.go:469] Request Headers:
I0621 18:42:25.754199   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:25.754205   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:25.757204   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:25.757300   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:26.253996   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:26.254023   35235 round_trippers.go:469] Request Headers:
I0621 18:42:26.254034   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:26.254039   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:26.256738   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:26.753420   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:26.753443   35235 round_trippers.go:469] Request Headers:
I0621 18:42:26.753450   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:26.753455   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:26.755671   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:27.253339   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:27.253364   35235 round_trippers.go:469] Request Headers:
I0621 18:42:27.253371   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:27.253375   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:27.256205   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:27.753997   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:27.754021   35235 round_trippers.go:469] Request Headers:
I0621 18:42:27.754026   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:27.754030   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:27.756311   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:28.254096   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:28.254119   35235 round_trippers.go:469] Request Headers:
I0621 18:42:28.254129   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:28.254136   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:28.256400   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:28.256508   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:28.753114   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:28.753142   35235 round_trippers.go:469] Request Headers:
I0621 18:42:28.753149   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:28.753152   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:28.755794   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:29.253467   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:29.253506   35235 round_trippers.go:469] Request Headers:
I0621 18:42:29.253515   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:29.253520   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:29.255937   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:29.753230   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:29.753253   35235 round_trippers.go:469] Request Headers:
I0621 18:42:29.753261   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:29.753264   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:29.755510   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:30.253160   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:30.253188   35235 round_trippers.go:469] Request Headers:
I0621 18:42:30.253199   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:30.253204   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:30.255843   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:30.753685   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:30.753706   35235 round_trippers.go:469] Request Headers:
I0621 18:42:30.753714   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:30.753718   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:30.756184   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:30.756306   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:31.253930   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:31.253958   35235 round_trippers.go:469] Request Headers:
I0621 18:42:31.253966   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:31.253970   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:31.256331   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:31.754108   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:31.754136   35235 round_trippers.go:469] Request Headers:
I0621 18:42:31.754147   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:31.754153   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:31.756842   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:32.253126   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:32.253145   35235 round_trippers.go:469] Request Headers:
I0621 18:42:32.253153   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:32.253157   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:32.255626   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:32.753394   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:32.753423   35235 round_trippers.go:469] Request Headers:
I0621 18:42:32.753436   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:32.753441   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:32.759766   35235 round_trippers.go:574] Response Status: 404 Not Found in 6 milliseconds
I0621 18:42:32.759867   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:33.253454   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:33.253477   35235 round_trippers.go:469] Request Headers:
I0621 18:42:33.253486   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:33.253493   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:33.256193   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:33.753896   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:33.753930   35235 round_trippers.go:469] Request Headers:
I0621 18:42:33.753937   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:33.753940   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:33.756411   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:34.253071   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:34.253105   35235 round_trippers.go:469] Request Headers:
I0621 18:42:34.253113   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:34.253116   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:34.255378   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:34.754073   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:34.754104   35235 round_trippers.go:469] Request Headers:
I0621 18:42:34.754112   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:34.754117   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:34.756791   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:35.253138   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:35.253166   35235 round_trippers.go:469] Request Headers:
I0621 18:42:35.253176   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:35.253181   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:35.255680   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:35.255791   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:35.753769   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:35.753793   35235 round_trippers.go:469] Request Headers:
I0621 18:42:35.753821   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:35.753828   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:35.756205   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:36.253942   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:36.253972   35235 round_trippers.go:469] Request Headers:
I0621 18:42:36.253985   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:36.253990   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:36.256241   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:36.753958   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:36.753982   35235 round_trippers.go:469] Request Headers:
I0621 18:42:36.754006   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:36.754013   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:36.756337   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:37.254108   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:37.254134   35235 round_trippers.go:469] Request Headers:
I0621 18:42:37.254148   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:37.254152   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:37.256697   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:37.256821   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:37.753346   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:37.753370   35235 round_trippers.go:469] Request Headers:
I0621 18:42:37.753378   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:37.753383   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:37.755503   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:38.253147   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:38.253172   35235 round_trippers.go:469] Request Headers:
I0621 18:42:38.253182   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:38.253186   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:38.256886   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I0621 18:42:38.753274   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:38.753305   35235 round_trippers.go:469] Request Headers:
I0621 18:42:38.753315   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:38.753322   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:38.755756   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:39.253414   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:39.253441   35235 round_trippers.go:469] Request Headers:
I0621 18:42:39.253449   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:39.253454   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:39.256586   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I0621 18:42:39.753328   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:39.753366   35235 round_trippers.go:469] Request Headers:
I0621 18:42:39.753374   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:39.753380   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:39.755869   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:39.755974   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:40.253555   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:40.253577   35235 round_trippers.go:469] Request Headers:
I0621 18:42:40.253585   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:40.253589   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:40.255802   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:40.753689   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:40.753711   35235 round_trippers.go:469] Request Headers:
I0621 18:42:40.753720   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:40.753724   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:40.756155   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:41.253945   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:41.253969   35235 round_trippers.go:469] Request Headers:
I0621 18:42:41.253978   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:41.253984   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:41.256566   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:41.753259   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:41.753284   35235 round_trippers.go:469] Request Headers:
I0621 18:42:41.753292   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:41.753296   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:41.756013   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:41.756172   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:42.253766   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:42.253789   35235 round_trippers.go:469] Request Headers:
I0621 18:42:42.253805   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:42.253811   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:42.256327   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:42.753105   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:42.753127   35235 round_trippers.go:469] Request Headers:
I0621 18:42:42.753137   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:42.753141   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:42.755495   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:43.253158   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:43.253179   35235 round_trippers.go:469] Request Headers:
I0621 18:42:43.253187   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:43.253192   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:43.255316   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:43.754058   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:43.754079   35235 round_trippers.go:469] Request Headers:
I0621 18:42:43.754087   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:43.754090   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:43.756779   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:43.756888   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:44.253472   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:44.253494   35235 round_trippers.go:469] Request Headers:
I0621 18:42:44.253503   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:44.253506   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:44.256311   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:44.754068   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:44.754088   35235 round_trippers.go:469] Request Headers:
I0621 18:42:44.754095   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:44.754099   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:44.756462   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:45.253132   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:45.253163   35235 round_trippers.go:469] Request Headers:
I0621 18:42:45.253173   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:45.253177   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:45.255775   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:45.753992   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:45.754022   35235 round_trippers.go:469] Request Headers:
I0621 18:42:45.754033   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:45.754039   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:45.756508   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:46.253201   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:46.253222   35235 round_trippers.go:469] Request Headers:
I0621 18:42:46.253228   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:46.253233   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:46.255332   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:46.255455   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:46.754119   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:46.754140   35235 round_trippers.go:469] Request Headers:
I0621 18:42:46.754147   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:46.754150   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:46.757068   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:47.253888   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:47.253912   35235 round_trippers.go:469] Request Headers:
I0621 18:42:47.253921   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:47.253930   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:47.256903   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:47.753583   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:47.753605   35235 round_trippers.go:469] Request Headers:
I0621 18:42:47.753611   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:47.753615   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:47.756074   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:48.253811   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:48.253833   35235 round_trippers.go:469] Request Headers:
I0621 18:42:48.253844   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:48.253850   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:48.256655   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:48.256749   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:48.753312   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:48.753336   35235 round_trippers.go:469] Request Headers:
I0621 18:42:48.753345   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:48.753349   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:48.755629   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:49.253237   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:49.253260   35235 round_trippers.go:469] Request Headers:
I0621 18:42:49.253270   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:49.253274   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:49.255503   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:49.753184   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:49.753205   35235 round_trippers.go:469] Request Headers:
I0621 18:42:49.753213   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:49.753218   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:49.756006   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:50.253818   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:50.253844   35235 round_trippers.go:469] Request Headers:
I0621 18:42:50.253856   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:50.253862   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:50.256953   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I0621 18:42:50.257059   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:50.754033   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:50.754054   35235 round_trippers.go:469] Request Headers:
I0621 18:42:50.754062   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:50.754066   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:50.756622   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:51.253295   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:51.253316   35235 round_trippers.go:469] Request Headers:
I0621 18:42:51.253324   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:51.253327   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:51.255813   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:51.753510   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:51.753533   35235 round_trippers.go:469] Request Headers:
I0621 18:42:51.753541   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:51.753544   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:51.755825   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:52.253506   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:52.253528   35235 round_trippers.go:469] Request Headers:
I0621 18:42:52.253535   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:52.253539   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:52.255863   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:52.753660   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:52.753681   35235 round_trippers.go:469] Request Headers:
I0621 18:42:52.753688   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:52.753692   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:52.756168   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:52.756259   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:53.253472   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:53.253494   35235 round_trippers.go:469] Request Headers:
I0621 18:42:53.253503   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:53.253511   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:53.256126   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:53.753943   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:53.753965   35235 round_trippers.go:469] Request Headers:
I0621 18:42:53.753972   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:53.753976   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:53.756180   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:54.253977   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:54.254000   35235 round_trippers.go:469] Request Headers:
I0621 18:42:54.254008   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:54.254011   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:54.257279   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I0621 18:42:54.753658   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:54.753688   35235 round_trippers.go:469] Request Headers:
I0621 18:42:54.753698   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:54.753704   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:54.756429   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:54.756533   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:55.253133   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:55.253154   35235 round_trippers.go:469] Request Headers:
I0621 18:42:55.253162   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:55.253166   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:55.255548   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:55.753272   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:55.753294   35235 round_trippers.go:469] Request Headers:
I0621 18:42:55.753301   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:55.753306   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:55.755515   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:56.253219   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:56.253239   35235 round_trippers.go:469] Request Headers:
I0621 18:42:56.253246   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:56.253252   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:56.255877   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:56.753551   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:56.753574   35235 round_trippers.go:469] Request Headers:
I0621 18:42:56.753581   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:56.753585   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:56.756745   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I0621 18:42:56.756925   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:57.253505   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:57.253529   35235 round_trippers.go:469] Request Headers:
I0621 18:42:57.253541   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:57.253548   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:57.255986   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:57.753791   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:57.753842   35235 round_trippers.go:469] Request Headers:
I0621 18:42:57.753852   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:57.753856   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:57.757122   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I0621 18:42:58.253959   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:58.253982   35235 round_trippers.go:469] Request Headers:
I0621 18:42:58.253990   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:58.253995   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:58.256342   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:58.754111   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:58.754137   35235 round_trippers.go:469] Request Headers:
I0621 18:42:58.754145   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:58.754148   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:58.756826   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:59.253496   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:59.253517   35235 round_trippers.go:469] Request Headers:
I0621 18:42:59.253525   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:59.253528   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:59.255815   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:42:59.255919   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:42:59.753196   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:42:59.753218   35235 round_trippers.go:469] Request Headers:
I0621 18:42:59.753225   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:42:59.753228   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:42:59.756927   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I0621 18:43:00.253645   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:00.253673   35235 round_trippers.go:469] Request Headers:
I0621 18:43:00.253682   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:00.253685   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:00.256727   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I0621 18:43:00.753832   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:00.753860   35235 round_trippers.go:469] Request Headers:
I0621 18:43:00.753871   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:00.753877   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:00.757381   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I0621 18:43:01.254063   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:01.254085   35235 round_trippers.go:469] Request Headers:
I0621 18:43:01.254092   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:01.254097   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:01.256220   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:01.256318   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:01.753941   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:01.753973   35235 round_trippers.go:469] Request Headers:
I0621 18:43:01.753985   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:01.753990   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:01.756534   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:02.253243   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:02.253273   35235 round_trippers.go:469] Request Headers:
I0621 18:43:02.253281   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:02.253284   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:02.255769   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:02.753560   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:02.753584   35235 round_trippers.go:469] Request Headers:
I0621 18:43:02.753591   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:02.753596   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:02.756335   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:03.254108   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:03.254137   35235 round_trippers.go:469] Request Headers:
I0621 18:43:03.254145   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:03.254148   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:03.256538   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:03.256640   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:03.753199   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:03.753251   35235 round_trippers.go:469] Request Headers:
I0621 18:43:03.753265   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:03.753272   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:03.755656   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:04.253292   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:04.253312   35235 round_trippers.go:469] Request Headers:
I0621 18:43:04.253320   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:04.253324   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:04.255471   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:04.753157   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:04.753179   35235 round_trippers.go:469] Request Headers:
I0621 18:43:04.753186   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:04.753191   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:04.755591   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:05.253259   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:05.253280   35235 round_trippers.go:469] Request Headers:
I0621 18:43:05.253287   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:05.253292   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:05.256074   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:05.753086   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:05.753109   35235 round_trippers.go:469] Request Headers:
I0621 18:43:05.753116   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:05.753120   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:05.755731   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:05.755839   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:06.253429   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:06.253464   35235 round_trippers.go:469] Request Headers:
I0621 18:43:06.253472   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:06.253476   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:06.255749   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:06.753405   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:06.753451   35235 round_trippers.go:469] Request Headers:
I0621 18:43:06.753458   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:06.753462   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:06.756151   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:07.253952   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:07.253973   35235 round_trippers.go:469] Request Headers:
I0621 18:43:07.253981   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:07.253983   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:07.256319   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:07.754096   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:07.754123   35235 round_trippers.go:469] Request Headers:
I0621 18:43:07.754138   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:07.754148   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:07.757338   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I0621 18:43:07.757461   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:08.254099   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:08.254121   35235 round_trippers.go:469] Request Headers:
I0621 18:43:08.254129   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:08.254133   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:08.256774   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:08.753440   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:08.753462   35235 round_trippers.go:469] Request Headers:
I0621 18:43:08.753469   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:08.753474   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:08.756358   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:09.254096   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:09.254117   35235 round_trippers.go:469] Request Headers:
I0621 18:43:09.254125   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:09.254129   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:09.256429   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:09.753127   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:09.753150   35235 round_trippers.go:469] Request Headers:
I0621 18:43:09.753161   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:09.753167   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:09.755586   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:10.253272   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:10.253294   35235 round_trippers.go:469] Request Headers:
I0621 18:43:10.253302   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:10.253306   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:10.255631   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:10.255739   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:10.753668   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:10.753696   35235 round_trippers.go:469] Request Headers:
I0621 18:43:10.753706   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:10.753713   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:10.756201   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:11.253962   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:11.253985   35235 round_trippers.go:469] Request Headers:
I0621 18:43:11.253993   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:11.253997   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:11.256834   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:11.753498   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:11.753530   35235 round_trippers.go:469] Request Headers:
I0621 18:43:11.753538   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:11.753541   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:11.756002   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:12.253852   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:12.253878   35235 round_trippers.go:469] Request Headers:
I0621 18:43:12.253889   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:12.253894   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:12.255623   35235 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I0621 18:43:12.753348   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:12.753368   35235 round_trippers.go:469] Request Headers:
I0621 18:43:12.753376   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:12.753380   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:12.756773   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I0621 18:43:12.756924   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:13.253240   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:13.253269   35235 round_trippers.go:469] Request Headers:
I0621 18:43:13.253279   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:13.253283   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:13.255681   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:13.753478   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:13.753514   35235 round_trippers.go:469] Request Headers:
I0621 18:43:13.753525   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:13.753529   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:13.755934   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:14.253664   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:14.253691   35235 round_trippers.go:469] Request Headers:
I0621 18:43:14.253702   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:14.253708   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:14.255944   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:14.753658   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:14.753690   35235 round_trippers.go:469] Request Headers:
I0621 18:43:14.753701   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:14.753708   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:14.756145   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:15.253911   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:15.253939   35235 round_trippers.go:469] Request Headers:
I0621 18:43:15.253950   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:15.253955   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:15.256242   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:15.256332   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:15.753142   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:15.753171   35235 round_trippers.go:469] Request Headers:
I0621 18:43:15.753192   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:15.753198   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:15.755492   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:16.253211   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:16.253233   35235 round_trippers.go:469] Request Headers:
I0621 18:43:16.253241   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:16.253245   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:16.255511   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:16.753200   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:16.753230   35235 round_trippers.go:469] Request Headers:
I0621 18:43:16.753241   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:16.753247   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:16.755576   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:17.253273   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:17.253302   35235 round_trippers.go:469] Request Headers:
I0621 18:43:17.253311   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:17.253318   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:17.255913   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:17.753621   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:17.753649   35235 round_trippers.go:469] Request Headers:
I0621 18:43:17.753659   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:17.753663   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:17.756926   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I0621 18:43:17.757048   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:18.253566   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:18.253589   35235 round_trippers.go:469] Request Headers:
I0621 18:43:18.253597   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:18.253602   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:18.255644   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:18.753408   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:18.753435   35235 round_trippers.go:469] Request Headers:
I0621 18:43:18.753446   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:18.753454   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:18.756037   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:19.253726   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:19.253747   35235 round_trippers.go:469] Request Headers:
I0621 18:43:19.253754   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:19.253757   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:19.255901   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:19.753588   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:19.753610   35235 round_trippers.go:469] Request Headers:
I0621 18:43:19.753618   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:19.753625   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:19.756088   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:20.253881   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:20.253910   35235 round_trippers.go:469] Request Headers:
I0621 18:43:20.253924   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:20.253953   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:20.256596   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:20.256721   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:20.753382   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:20.753404   35235 round_trippers.go:469] Request Headers:
I0621 18:43:20.753413   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:20.753418   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:20.756358   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:21.254088   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:21.254110   35235 round_trippers.go:469] Request Headers:
I0621 18:43:21.254121   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:21.254126   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:21.256303   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:21.754081   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:21.754108   35235 round_trippers.go:469] Request Headers:
I0621 18:43:21.754124   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:21.754131   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:21.757208   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I0621 18:43:22.253974   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:22.254000   35235 round_trippers.go:469] Request Headers:
I0621 18:43:22.254012   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:22.254018   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:22.256304   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:22.754129   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:22.754151   35235 round_trippers.go:469] Request Headers:
I0621 18:43:22.754163   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:22.754169   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:22.756500   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:22.756606   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:23.253946   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:23.253971   35235 round_trippers.go:469] Request Headers:
I0621 18:43:23.253982   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:23.253987   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:23.256653   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:23.753315   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:23.753339   35235 round_trippers.go:469] Request Headers:
I0621 18:43:23.753351   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:23.753356   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:23.755944   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:24.253606   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:24.253631   35235 round_trippers.go:469] Request Headers:
I0621 18:43:24.253642   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:24.253648   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:24.256093   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:24.753882   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:24.753906   35235 round_trippers.go:469] Request Headers:
I0621 18:43:24.753917   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:24.753925   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:24.756558   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:24.756656   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:25.253213   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:25.253248   35235 round_trippers.go:469] Request Headers:
I0621 18:43:25.253270   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:25.253277   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:25.255472   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:25.753250   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:25.753272   35235 round_trippers.go:469] Request Headers:
I0621 18:43:25.753279   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:25.753282   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:25.755573   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:26.253253   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:26.253279   35235 round_trippers.go:469] Request Headers:
I0621 18:43:26.253287   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:26.253293   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:26.256024   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:26.753826   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:26.753845   35235 round_trippers.go:469] Request Headers:
I0621 18:43:26.753854   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:26.753858   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:26.755913   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:27.253577   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:27.253603   35235 round_trippers.go:469] Request Headers:
I0621 18:43:27.253612   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:27.253616   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:27.256165   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:27.256300   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:27.753976   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:27.754001   35235 round_trippers.go:469] Request Headers:
I0621 18:43:27.754010   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:27.754014   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:27.756115   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:28.253925   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:28.253947   35235 round_trippers.go:469] Request Headers:
I0621 18:43:28.253955   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:28.253965   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:28.256436   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:28.753133   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:28.753157   35235 round_trippers.go:469] Request Headers:
I0621 18:43:28.753165   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:28.753170   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:28.755397   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:29.253099   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:29.253122   35235 round_trippers.go:469] Request Headers:
I0621 18:43:29.253129   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:29.253135   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:29.256178   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I0621 18:43:29.753984   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:29.754006   35235 round_trippers.go:469] Request Headers:
I0621 18:43:29.754022   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:29.754026   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:29.755897   35235 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I0621 18:43:29.756008   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:30.254099   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:30.254122   35235 round_trippers.go:469] Request Headers:
I0621 18:43:30.254130   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:30.254134   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:30.256362   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:30.754136   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:30.754157   35235 round_trippers.go:469] Request Headers:
I0621 18:43:30.754165   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:30.754170   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:30.756422   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:31.254116   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:31.254138   35235 round_trippers.go:469] Request Headers:
I0621 18:43:31.254146   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:31.254150   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:31.256221   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:31.753960   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:31.753982   35235 round_trippers.go:469] Request Headers:
I0621 18:43:31.753990   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:31.753995   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:31.756200   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:31.756313   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:32.253983   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:32.254005   35235 round_trippers.go:469] Request Headers:
I0621 18:43:32.254013   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:32.254017   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:32.256078   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:32.753997   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:32.754018   35235 round_trippers.go:469] Request Headers:
I0621 18:43:32.754028   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:32.754035   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:32.756287   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:33.254048   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:33.254069   35235 round_trippers.go:469] Request Headers:
I0621 18:43:33.254076   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:33.254079   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:33.256373   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:33.754131   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:33.754156   35235 round_trippers.go:469] Request Headers:
I0621 18:43:33.754164   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:33.754171   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:33.756488   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:33.756588   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:34.253166   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:34.253190   35235 round_trippers.go:469] Request Headers:
I0621 18:43:34.253199   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:34.253203   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:34.255400   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:34.753125   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:34.753154   35235 round_trippers.go:469] Request Headers:
I0621 18:43:34.753163   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:34.753168   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:34.755457   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:35.253151   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:35.253178   35235 round_trippers.go:469] Request Headers:
I0621 18:43:35.253187   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:35.253191   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:35.256046   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:35.754110   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:35.754165   35235 round_trippers.go:469] Request Headers:
I0621 18:43:35.754179   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:35.754185   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:35.756571   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:35.756693   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:36.253235   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:36.253260   35235 round_trippers.go:469] Request Headers:
I0621 18:43:36.253270   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:36.253276   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:36.255776   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:36.753435   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:36.753457   35235 round_trippers.go:469] Request Headers:
I0621 18:43:36.753469   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:36.753478   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:36.755864   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:37.253530   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:37.253558   35235 round_trippers.go:469] Request Headers:
I0621 18:43:37.253569   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:37.253575   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:37.255768   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:37.753419   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:37.753447   35235 round_trippers.go:469] Request Headers:
I0621 18:43:37.753458   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:37.753463   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:37.755842   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:38.253318   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:38.253343   35235 round_trippers.go:469] Request Headers:
I0621 18:43:38.253355   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:38.253362   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:38.255755   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:38.255877   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:38.753477   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:38.753504   35235 round_trippers.go:469] Request Headers:
I0621 18:43:38.753512   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:38.753517   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:38.755767   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:39.253430   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:39.253450   35235 round_trippers.go:469] Request Headers:
I0621 18:43:39.253457   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:39.253463   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:39.255589   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:39.753233   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:39.753260   35235 round_trippers.go:469] Request Headers:
I0621 18:43:39.753270   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:39.753276   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:39.755668   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:40.253355   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:40.253390   35235 round_trippers.go:469] Request Headers:
I0621 18:43:40.253401   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:40.253406   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:40.255839   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:40.255989   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:40.753697   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:40.753717   35235 round_trippers.go:469] Request Headers:
I0621 18:43:40.753724   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:40.753727   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:40.756179   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:41.253951   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:41.253973   35235 round_trippers.go:469] Request Headers:
I0621 18:43:41.253981   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:41.253986   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:41.256552   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:41.753265   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:41.753288   35235 round_trippers.go:469] Request Headers:
I0621 18:43:41.753296   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:41.753303   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:41.755598   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:42.253276   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:42.253300   35235 round_trippers.go:469] Request Headers:
I0621 18:43:42.253308   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:42.253312   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:42.255651   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:42.753497   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:42.753521   35235 round_trippers.go:469] Request Headers:
I0621 18:43:42.753530   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:42.753535   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:42.756468   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:42.756599   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:43.253154   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:43.253180   35235 round_trippers.go:469] Request Headers:
I0621 18:43:43.253190   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:43.253195   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:43.255537   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:43.753238   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:43.753265   35235 round_trippers.go:469] Request Headers:
I0621 18:43:43.753277   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:43.753282   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:43.755936   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:44.253576   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:44.253596   35235 round_trippers.go:469] Request Headers:
I0621 18:43:44.253602   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:44.253605   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:44.255821   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:44.753231   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:44.753254   35235 round_trippers.go:469] Request Headers:
I0621 18:43:44.753261   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:44.753267   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:44.755628   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:45.253355   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:45.253388   35235 round_trippers.go:469] Request Headers:
I0621 18:43:45.253398   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:45.253403   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:45.255498   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:45.255599   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:45.753559   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:45.753581   35235 round_trippers.go:469] Request Headers:
I0621 18:43:45.753588   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:45.753592   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:45.755971   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:46.253637   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:46.253659   35235 round_trippers.go:469] Request Headers:
I0621 18:43:46.253667   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:46.253670   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:46.255870   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:46.753524   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:46.753546   35235 round_trippers.go:469] Request Headers:
I0621 18:43:46.753553   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:46.753558   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:46.755816   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:47.253503   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:47.253527   35235 round_trippers.go:469] Request Headers:
I0621 18:43:47.253535   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:47.253539   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:47.255982   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:47.256080   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:47.753719   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:47.753741   35235 round_trippers.go:469] Request Headers:
I0621 18:43:47.753747   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:47.753751   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:47.756084   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:48.253863   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:48.253882   35235 round_trippers.go:469] Request Headers:
I0621 18:43:48.253890   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:48.253895   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:48.256321   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:48.754097   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:48.754125   35235 round_trippers.go:469] Request Headers:
I0621 18:43:48.754133   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:48.754137   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:48.756772   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:49.253414   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:49.253435   35235 round_trippers.go:469] Request Headers:
I0621 18:43:49.253443   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:49.253447   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:49.256024   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:49.256114   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:49.753782   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:49.753817   35235 round_trippers.go:469] Request Headers:
I0621 18:43:49.753826   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:49.753830   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:49.756294   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:50.254038   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:50.254062   35235 round_trippers.go:469] Request Headers:
I0621 18:43:50.254071   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:50.254079   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:50.256503   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:50.753420   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:50.753444   35235 round_trippers.go:469] Request Headers:
I0621 18:43:50.753456   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:50.753461   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:50.755767   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:51.253472   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:51.253498   35235 round_trippers.go:469] Request Headers:
I0621 18:43:51.253504   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:51.253508   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:51.255753   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:51.754121   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:51.754148   35235 round_trippers.go:469] Request Headers:
I0621 18:43:51.754160   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:51.754169   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:51.756676   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:51.756799   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:52.253316   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:52.253345   35235 round_trippers.go:469] Request Headers:
I0621 18:43:52.253355   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:52.253362   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:52.255773   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:52.753500   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:52.753535   35235 round_trippers.go:469] Request Headers:
I0621 18:43:52.753543   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:52.753547   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:52.755866   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:53.253575   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:53.253595   35235 round_trippers.go:469] Request Headers:
I0621 18:43:53.253603   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:53.253606   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:53.255800   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:53.753469   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:53.753497   35235 round_trippers.go:469] Request Headers:
I0621 18:43:53.753507   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:53.753512   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:53.755769   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:54.253422   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:54.253443   35235 round_trippers.go:469] Request Headers:
I0621 18:43:54.253451   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:54.253454   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:54.255615   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:54.255730   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:54.753348   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:54.753371   35235 round_trippers.go:469] Request Headers:
I0621 18:43:54.753379   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:54.753384   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:54.756006   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:55.253765   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:55.253816   35235 round_trippers.go:469] Request Headers:
I0621 18:43:55.253832   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:55.253837   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:55.256102   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:55.753196   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:55.753224   35235 round_trippers.go:469] Request Headers:
I0621 18:43:55.753235   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:55.753240   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:55.755510   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:56.253249   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:56.253281   35235 round_trippers.go:469] Request Headers:
I0621 18:43:56.253295   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:56.253303   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:56.256169   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:56.256296   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:56.753979   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:56.753999   35235 round_trippers.go:469] Request Headers:
I0621 18:43:56.754006   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:56.754011   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:56.756516   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:57.253169   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:57.253189   35235 round_trippers.go:469] Request Headers:
I0621 18:43:57.253196   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:57.253202   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:57.255709   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:57.753378   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:57.753400   35235 round_trippers.go:469] Request Headers:
I0621 18:43:57.753407   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:57.753411   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:57.756612   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I0621 18:43:58.253258   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:58.253290   35235 round_trippers.go:469] Request Headers:
I0621 18:43:58.253296   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:58.253299   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:58.255806   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:58.753454   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:58.753477   35235 round_trippers.go:469] Request Headers:
I0621 18:43:58.753485   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:58.753493   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:58.755850   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:58.755983   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:43:59.253493   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:59.253514   35235 round_trippers.go:469] Request Headers:
I0621 18:43:59.253522   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:59.253525   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:59.255828   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:43:59.753511   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:43:59.753533   35235 round_trippers.go:469] Request Headers:
I0621 18:43:59.753541   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:43:59.753544   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:43:59.755791   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:00.253485   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:00.253513   35235 round_trippers.go:469] Request Headers:
I0621 18:44:00.253521   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:00.253526   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:00.256129   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:00.753738   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:00.753764   35235 round_trippers.go:469] Request Headers:
I0621 18:44:00.753772   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:00.753776   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:00.756401   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:00.756523   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:01.254111   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:01.254135   35235 round_trippers.go:469] Request Headers:
I0621 18:44:01.254143   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:01.254147   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:01.256190   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:01.754048   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:01.754075   35235 round_trippers.go:469] Request Headers:
I0621 18:44:01.754082   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:01.754086   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:01.756494   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:02.253216   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:02.253241   35235 round_trippers.go:469] Request Headers:
I0621 18:44:02.253252   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:02.253260   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:02.255453   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:02.753135   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:02.753158   35235 round_trippers.go:469] Request Headers:
I0621 18:44:02.753166   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:02.753171   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:02.755390   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:03.254113   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:03.254136   35235 round_trippers.go:469] Request Headers:
I0621 18:44:03.254144   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:03.254148   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:03.256256   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:03.256480   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:03.754075   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:03.754100   35235 round_trippers.go:469] Request Headers:
I0621 18:44:03.754111   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:03.754118   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:03.756529   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:04.253240   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:04.253263   35235 round_trippers.go:469] Request Headers:
I0621 18:44:04.253270   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:04.253275   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:04.255430   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:04.753150   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:04.753176   35235 round_trippers.go:469] Request Headers:
I0621 18:44:04.753189   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:04.753195   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:04.755405   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:05.253064   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:05.253119   35235 round_trippers.go:469] Request Headers:
I0621 18:44:05.253131   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:05.253136   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:05.256296   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I0621 18:44:05.256529   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:05.753351   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:05.753374   35235 round_trippers.go:469] Request Headers:
I0621 18:44:05.753383   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:05.753387   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:05.755749   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:06.253439   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:06.253462   35235 round_trippers.go:469] Request Headers:
I0621 18:44:06.253474   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:06.253479   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:06.256427   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:06.753144   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:06.753166   35235 round_trippers.go:469] Request Headers:
I0621 18:44:06.753177   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:06.753183   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:06.755697   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:07.253393   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:07.253418   35235 round_trippers.go:469] Request Headers:
I0621 18:44:07.253428   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:07.253434   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:07.255527   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:07.753211   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:07.753240   35235 round_trippers.go:469] Request Headers:
I0621 18:44:07.753248   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:07.753251   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:07.755438   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:07.755544   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:08.253146   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:08.253169   35235 round_trippers.go:469] Request Headers:
I0621 18:44:08.253180   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:08.253186   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:08.257404   35235 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
I0621 18:44:08.754150   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:08.754174   35235 round_trippers.go:469] Request Headers:
I0621 18:44:08.754185   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:08.754190   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:08.756461   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:09.253177   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:09.253205   35235 round_trippers.go:469] Request Headers:
I0621 18:44:09.253212   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:09.253217   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:09.255685   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:09.753345   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:09.753363   35235 round_trippers.go:469] Request Headers:
I0621 18:44:09.753374   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:09.753381   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:09.755703   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:09.755818   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:10.253233   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:10.253256   35235 round_trippers.go:469] Request Headers:
I0621 18:44:10.253264   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:10.253268   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:10.255460   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:10.753414   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:10.753434   35235 round_trippers.go:469] Request Headers:
I0621 18:44:10.753441   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:10.753446   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:10.756028   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:11.253831   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:11.253855   35235 round_trippers.go:469] Request Headers:
I0621 18:44:11.253864   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:11.253868   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:11.256408   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:11.753114   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:11.753141   35235 round_trippers.go:469] Request Headers:
I0621 18:44:11.753151   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:11.753155   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:11.755503   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:12.253193   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:12.253220   35235 round_trippers.go:469] Request Headers:
I0621 18:44:12.253228   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:12.253232   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:12.255424   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:12.255520   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:12.753933   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:12.753952   35235 round_trippers.go:469] Request Headers:
I0621 18:44:12.753965   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:12.753969   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:12.756711   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:13.253381   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:13.253409   35235 round_trippers.go:469] Request Headers:
I0621 18:44:13.253416   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:13.253422   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:13.256041   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:13.753786   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:13.753822   35235 round_trippers.go:469] Request Headers:
I0621 18:44:13.753833   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:13.753837   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:13.755942   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:14.253592   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:14.253615   35235 round_trippers.go:469] Request Headers:
I0621 18:44:14.253622   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:14.253626   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:14.256403   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:14.256498   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:14.753105   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:14.753127   35235 round_trippers.go:469] Request Headers:
I0621 18:44:14.753135   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:14.753138   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:14.755470   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:15.253125   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:15.253146   35235 round_trippers.go:469] Request Headers:
I0621 18:44:15.253153   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:15.253157   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:15.255470   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:15.753440   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:15.753464   35235 round_trippers.go:469] Request Headers:
I0621 18:44:15.753474   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:15.753479   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:15.757073   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I0621 18:44:16.253853   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:16.253872   35235 round_trippers.go:469] Request Headers:
I0621 18:44:16.253880   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:16.253884   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:16.256131   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:16.753972   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:16.753996   35235 round_trippers.go:469] Request Headers:
I0621 18:44:16.754003   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:16.754006   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:16.756320   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:16.756430   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:17.254075   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:17.254102   35235 round_trippers.go:469] Request Headers:
I0621 18:44:17.254111   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:17.254114   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:17.256665   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:17.753372   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:17.753403   35235 round_trippers.go:469] Request Headers:
I0621 18:44:17.753414   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:17.753418   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:17.755677   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:18.253370   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:18.253394   35235 round_trippers.go:469] Request Headers:
I0621 18:44:18.253401   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:18.253407   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:18.255899   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:18.753459   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:18.753481   35235 round_trippers.go:469] Request Headers:
I0621 18:44:18.753489   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:18.753493   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:18.756430   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:18.756533   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:19.253235   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:19.253264   35235 round_trippers.go:469] Request Headers:
I0621 18:44:19.253275   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:19.253281   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:19.255426   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:19.753102   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:19.753128   35235 round_trippers.go:469] Request Headers:
I0621 18:44:19.753142   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:19.753146   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:19.755881   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:20.253619   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:20.253653   35235 round_trippers.go:469] Request Headers:
I0621 18:44:20.253664   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:20.253672   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:20.255868   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:20.753704   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:20.753726   35235 round_trippers.go:469] Request Headers:
I0621 18:44:20.753733   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:20.753737   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:20.756139   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:21.253737   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:21.253760   35235 round_trippers.go:469] Request Headers:
I0621 18:44:21.253766   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:21.253770   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:21.255914   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:21.256015   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:21.753638   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:21.753666   35235 round_trippers.go:469] Request Headers:
I0621 18:44:21.753677   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:21.753683   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:21.756099   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:22.254063   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:22.254084   35235 round_trippers.go:469] Request Headers:
I0621 18:44:22.254093   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:22.254099   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:22.256675   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:22.753446   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:22.753490   35235 round_trippers.go:469] Request Headers:
I0621 18:44:22.753498   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:22.753521   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:22.755891   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:23.253572   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:23.253594   35235 round_trippers.go:469] Request Headers:
I0621 18:44:23.253602   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:23.253607   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:23.255945   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:23.256068   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:23.753627   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:23.753649   35235 round_trippers.go:469] Request Headers:
I0621 18:44:23.753657   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:23.753660   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:23.756098   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:24.253879   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:24.253903   35235 round_trippers.go:469] Request Headers:
I0621 18:44:24.253928   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:24.253933   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:24.255876   35235 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I0621 18:44:24.753548   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:24.753569   35235 round_trippers.go:469] Request Headers:
I0621 18:44:24.753578   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:24.753583   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:24.756004   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:25.253845   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:25.253870   35235 round_trippers.go:469] Request Headers:
I0621 18:44:25.253878   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:25.253881   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:25.256229   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:25.256332   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:25.753201   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:25.753222   35235 round_trippers.go:469] Request Headers:
I0621 18:44:25.753230   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:25.753235   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:25.755778   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:26.253532   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:26.253560   35235 round_trippers.go:469] Request Headers:
I0621 18:44:26.253572   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:26.253579   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:26.256011   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:26.753500   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:26.753525   35235 round_trippers.go:469] Request Headers:
I0621 18:44:26.753537   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:26.753542   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:26.755797   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:27.253471   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:27.253497   35235 round_trippers.go:469] Request Headers:
I0621 18:44:27.253505   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:27.253511   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:27.255826   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:27.753539   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:27.753565   35235 round_trippers.go:469] Request Headers:
I0621 18:44:27.753575   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:27.753579   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:27.756102   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:27.756216   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:28.253894   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:28.253920   35235 round_trippers.go:469] Request Headers:
I0621 18:44:28.253932   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:28.253938   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:28.256388   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:28.753678   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:28.753709   35235 round_trippers.go:469] Request Headers:
I0621 18:44:28.753718   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:28.753722   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:28.756027   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:29.253758   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:29.253784   35235 round_trippers.go:469] Request Headers:
I0621 18:44:29.253793   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:29.253814   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:29.256028   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:29.753737   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:29.753760   35235 round_trippers.go:469] Request Headers:
I0621 18:44:29.753768   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:29.753771   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:29.756179   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:29.756294   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:30.253915   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:30.253942   35235 round_trippers.go:469] Request Headers:
I0621 18:44:30.253957   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:30.253962   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:30.256414   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:30.753479   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:30.753500   35235 round_trippers.go:469] Request Headers:
I0621 18:44:30.753509   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:30.753515   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:30.756407   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:31.254125   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:31.254147   35235 round_trippers.go:469] Request Headers:
I0621 18:44:31.254156   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:31.254160   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:31.256213   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:31.753958   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:31.753983   35235 round_trippers.go:469] Request Headers:
I0621 18:44:31.753991   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:31.753997   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:31.756682   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:31.756791   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:32.253389   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:32.253412   35235 round_trippers.go:469] Request Headers:
I0621 18:44:32.253423   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:32.253427   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:32.256484   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I0621 18:44:32.753165   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:32.753190   35235 round_trippers.go:469] Request Headers:
I0621 18:44:32.753202   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:32.753209   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:32.755553   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:33.253228   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:33.253249   35235 round_trippers.go:469] Request Headers:
I0621 18:44:33.253264   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:33.253271   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:33.255694   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:33.753130   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:33.753157   35235 round_trippers.go:469] Request Headers:
I0621 18:44:33.753166   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:33.753174   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:33.755727   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:34.253411   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:34.253435   35235 round_trippers.go:469] Request Headers:
I0621 18:44:34.253442   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:34.253447   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:34.255741   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:34.255854   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:34.753417   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:34.753442   35235 round_trippers.go:469] Request Headers:
I0621 18:44:34.753454   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:34.753459   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:34.756164   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:35.253746   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:35.253769   35235 round_trippers.go:469] Request Headers:
I0621 18:44:35.253781   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:35.253785   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:35.255949   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:35.753180   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:35.753204   35235 round_trippers.go:469] Request Headers:
I0621 18:44:35.753220   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:35.753224   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:35.755860   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:36.253496   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:36.253537   35235 round_trippers.go:469] Request Headers:
I0621 18:44:36.253544   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:36.253548   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:36.255722   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:36.753441   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:36.753466   35235 round_trippers.go:469] Request Headers:
I0621 18:44:36.753477   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:36.753481   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:36.756306   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:36.756401   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:37.254079   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:37.254100   35235 round_trippers.go:469] Request Headers:
I0621 18:44:37.254107   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:37.254110   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:37.256481   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:37.753199   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:37.753234   35235 round_trippers.go:469] Request Headers:
I0621 18:44:37.753242   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:37.753246   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:37.755800   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:38.253519   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:38.253548   35235 round_trippers.go:469] Request Headers:
I0621 18:44:38.253559   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:38.253567   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:38.256131   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:38.753661   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:38.753683   35235 round_trippers.go:469] Request Headers:
I0621 18:44:38.753691   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:38.753696   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:38.756247   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:39.254003   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:39.254027   35235 round_trippers.go:469] Request Headers:
I0621 18:44:39.254034   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:39.254037   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:39.256345   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:39.256439   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:39.754061   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:39.754081   35235 round_trippers.go:469] Request Headers:
I0621 18:44:39.754089   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:39.754092   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:39.756926   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:40.253621   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:40.253650   35235 round_trippers.go:469] Request Headers:
I0621 18:44:40.253660   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:40.253664   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:40.255986   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:40.754015   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:40.754041   35235 round_trippers.go:469] Request Headers:
I0621 18:44:40.754052   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:40.754060   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:40.756357   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:41.253792   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:41.253822   35235 round_trippers.go:469] Request Headers:
I0621 18:44:41.253830   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:41.253835   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:41.256450   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:41.256576   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:41.753156   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:41.753181   35235 round_trippers.go:469] Request Headers:
I0621 18:44:41.753189   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:41.753192   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:41.755721   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:42.253422   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:42.253448   35235 round_trippers.go:469] Request Headers:
I0621 18:44:42.253456   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:42.253461   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:42.255626   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:42.753398   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:42.753419   35235 round_trippers.go:469] Request Headers:
I0621 18:44:42.753428   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:42.753432   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:42.756145   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:43.253928   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:43.253955   35235 round_trippers.go:469] Request Headers:
I0621 18:44:43.253967   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:43.253971   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:43.256730   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:43.256834   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:43.753403   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:43.753426   35235 round_trippers.go:469] Request Headers:
I0621 18:44:43.753433   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:43.753437   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:43.755806   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:44.253486   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:44.253510   35235 round_trippers.go:469] Request Headers:
I0621 18:44:44.253518   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:44.253523   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:44.256005   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:44.753773   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:44.753822   35235 round_trippers.go:469] Request Headers:
I0621 18:44:44.753832   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:44.753839   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:44.756148   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:45.253938   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:45.253965   35235 round_trippers.go:469] Request Headers:
I0621 18:44:45.253978   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:45.253983   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:45.256332   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:45.753319   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:45.753343   35235 round_trippers.go:469] Request Headers:
I0621 18:44:45.753351   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:45.753355   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:45.755917   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:45.756046   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:46.253601   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:46.253622   35235 round_trippers.go:469] Request Headers:
I0621 18:44:46.253634   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:46.253638   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:46.256124   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:46.753892   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:46.753915   35235 round_trippers.go:469] Request Headers:
I0621 18:44:46.753923   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:46.753926   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:46.756405   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:47.254133   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:47.254159   35235 round_trippers.go:469] Request Headers:
I0621 18:44:47.254183   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:47.254190   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:47.256769   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:47.753417   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:47.753450   35235 round_trippers.go:469] Request Headers:
I0621 18:44:47.753458   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:47.753463   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:47.755930   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:48.253628   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:48.253651   35235 round_trippers.go:469] Request Headers:
I0621 18:44:48.253658   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:48.253663   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:48.255838   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:48.255931   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:48.753538   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:48.753563   35235 round_trippers.go:469] Request Headers:
I0621 18:44:48.753574   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:48.753580   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:48.756631   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I0621 18:44:49.253251   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:49.253275   35235 round_trippers.go:469] Request Headers:
I0621 18:44:49.253306   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:49.253313   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:49.256044   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:49.753793   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:49.753839   35235 round_trippers.go:469] Request Headers:
I0621 18:44:49.753849   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:49.753855   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:49.756074   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:50.253898   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:50.253922   35235 round_trippers.go:469] Request Headers:
I0621 18:44:50.253932   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:50.253936   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:50.256569   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:50.256731   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:50.753704   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:50.753732   35235 round_trippers.go:469] Request Headers:
I0621 18:44:50.753742   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:50.753748   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:50.756051   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:51.253856   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:51.253881   35235 round_trippers.go:469] Request Headers:
I0621 18:44:51.253889   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:51.253893   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:51.256213   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:51.754024   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:51.754046   35235 round_trippers.go:469] Request Headers:
I0621 18:44:51.754054   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:51.754057   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:51.756688   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:52.253350   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:52.253372   35235 round_trippers.go:469] Request Headers:
I0621 18:44:52.253379   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:52.253382   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:52.255702   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:52.753469   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:52.753492   35235 round_trippers.go:469] Request Headers:
I0621 18:44:52.753500   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:52.753504   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:52.755375   35235 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I0621 18:44:52.755473   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:53.254058   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:53.254079   35235 round_trippers.go:469] Request Headers:
I0621 18:44:53.254086   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:53.254089   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:53.257691   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I0621 18:44:53.753362   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:53.753384   35235 round_trippers.go:469] Request Headers:
I0621 18:44:53.753392   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:53.753397   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:53.756165   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:54.253900   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:54.253924   35235 round_trippers.go:469] Request Headers:
I0621 18:44:54.253936   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:54.253941   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:54.258836   35235 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
I0621 18:44:54.753503   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:54.753531   35235 round_trippers.go:469] Request Headers:
I0621 18:44:54.753543   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:54.753550   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:54.756079   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:54.756230   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:55.253852   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:55.253878   35235 round_trippers.go:469] Request Headers:
I0621 18:44:55.253888   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:55.253893   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:55.257360   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I0621 18:44:55.753655   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:55.753677   35235 round_trippers.go:469] Request Headers:
I0621 18:44:55.753685   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:55.753690   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:55.755813   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:56.253479   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:56.253502   35235 round_trippers.go:469] Request Headers:
I0621 18:44:56.253510   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:56.253514   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:56.256268   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:56.754037   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:56.754060   35235 round_trippers.go:469] Request Headers:
I0621 18:44:56.754067   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:56.754070   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:56.756632   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:56.756724   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:57.253331   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:57.253354   35235 round_trippers.go:469] Request Headers:
I0621 18:44:57.253366   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:57.253370   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:57.255914   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:57.753607   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:57.753633   35235 round_trippers.go:469] Request Headers:
I0621 18:44:57.753644   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:57.753652   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:57.755812   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:58.253531   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:58.253555   35235 round_trippers.go:469] Request Headers:
I0621 18:44:58.253566   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:58.253572   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:58.255850   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:58.753512   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:58.753538   35235 round_trippers.go:469] Request Headers:
I0621 18:44:58.753549   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:58.753555   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:58.755710   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:59.253408   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:59.253430   35235 round_trippers.go:469] Request Headers:
I0621 18:44:59.253437   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:59.253441   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:59.255930   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:44:59.256041   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:44:59.753599   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:44:59.753627   35235 round_trippers.go:469] Request Headers:
I0621 18:44:59.753638   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:44:59.753645   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:44:59.756229   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:00.253985   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:00.254015   35235 round_trippers.go:469] Request Headers:
I0621 18:45:00.254025   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:00.254032   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:00.256308   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:00.753269   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:00.753302   35235 round_trippers.go:469] Request Headers:
I0621 18:45:00.753313   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:00.753318   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:00.756104   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:01.253837   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:01.253859   35235 round_trippers.go:469] Request Headers:
I0621 18:45:01.253866   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:01.253870   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:01.255961   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:01.256081   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:45:01.753756   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:01.753780   35235 round_trippers.go:469] Request Headers:
I0621 18:45:01.753788   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:01.753793   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:01.756409   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:02.253106   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:02.253130   35235 round_trippers.go:469] Request Headers:
I0621 18:45:02.253138   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:02.253142   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:02.255833   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:02.753652   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:02.753676   35235 round_trippers.go:469] Request Headers:
I0621 18:45:02.753684   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:02.753689   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:02.756269   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:03.254022   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:03.254046   35235 round_trippers.go:469] Request Headers:
I0621 18:45:03.254054   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:03.254058   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:03.256878   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:03.257002   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:45:03.753403   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:03.753427   35235 round_trippers.go:469] Request Headers:
I0621 18:45:03.753435   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:03.753439   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:03.756396   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:04.254152   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:04.254175   35235 round_trippers.go:469] Request Headers:
I0621 18:45:04.254183   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:04.254188   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:04.256522   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:04.753243   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:04.753267   35235 round_trippers.go:469] Request Headers:
I0621 18:45:04.753275   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:04.753279   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:04.755884   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:05.253582   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:05.253605   35235 round_trippers.go:469] Request Headers:
I0621 18:45:05.253613   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:05.253616   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:05.256501   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:05.753770   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:05.753809   35235 round_trippers.go:469] Request Headers:
I0621 18:45:05.753820   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:05.753826   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:05.756343   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:05.756444   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:45:06.254108   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:06.254134   35235 round_trippers.go:469] Request Headers:
I0621 18:45:06.254145   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:06.254153   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:06.256487   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:06.753139   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:06.753157   35235 round_trippers.go:469] Request Headers:
I0621 18:45:06.753165   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:06.753169   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:06.755898   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:07.253573   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:07.253597   35235 round_trippers.go:469] Request Headers:
I0621 18:45:07.253605   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:07.253609   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:07.256047   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:07.753861   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:07.753884   35235 round_trippers.go:469] Request Headers:
I0621 18:45:07.753891   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:07.753895   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:07.756234   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:08.254004   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:08.254028   35235 round_trippers.go:469] Request Headers:
I0621 18:45:08.254035   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:08.254039   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:08.256478   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:08.256592   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:45:08.753176   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:08.753198   35235 round_trippers.go:469] Request Headers:
I0621 18:45:08.753207   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:08.753213   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:08.755734   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:09.253450   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:09.253472   35235 round_trippers.go:469] Request Headers:
I0621 18:45:09.253480   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:09.253484   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:09.257716   35235 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
I0621 18:45:09.753430   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:09.753460   35235 round_trippers.go:469] Request Headers:
I0621 18:45:09.753470   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:09.753478   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:09.758419   35235 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
I0621 18:45:10.253123   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:10.253150   35235 round_trippers.go:469] Request Headers:
I0621 18:45:10.253160   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:10.253166   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:10.255214   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:10.754108   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:10.754137   35235 round_trippers.go:469] Request Headers:
I0621 18:45:10.754149   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:10.754154   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:10.756647   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:10.756759   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:45:11.253341   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:11.253365   35235 round_trippers.go:469] Request Headers:
I0621 18:45:11.253372   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:11.253375   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:11.259819   35235 round_trippers.go:574] Response Status: 404 Not Found in 6 milliseconds
I0621 18:45:11.753498   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:11.753523   35235 round_trippers.go:469] Request Headers:
I0621 18:45:11.753529   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:11.753532   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:11.756024   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:12.253755   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:12.253775   35235 round_trippers.go:469] Request Headers:
I0621 18:45:12.253782   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:12.253785   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:12.255827   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:12.753616   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:12.753642   35235 round_trippers.go:469] Request Headers:
I0621 18:45:12.753653   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:12.753659   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:12.756051   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:13.253856   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:13.253880   35235 round_trippers.go:469] Request Headers:
I0621 18:45:13.253887   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:13.253892   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:13.256135   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:13.256236   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:45:13.753934   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:13.753958   35235 round_trippers.go:469] Request Headers:
I0621 18:45:13.753965   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:13.753975   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:13.756256   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:14.254028   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:14.254049   35235 round_trippers.go:469] Request Headers:
I0621 18:45:14.254056   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:14.254060   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:14.256641   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:14.753330   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:14.753355   35235 round_trippers.go:469] Request Headers:
I0621 18:45:14.753368   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:14.753375   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:14.756085   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:15.253839   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:15.253861   35235 round_trippers.go:469] Request Headers:
I0621 18:45:15.253869   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:15.253873   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:15.256068   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:15.753228   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:15.753256   35235 round_trippers.go:469] Request Headers:
I0621 18:45:15.753267   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:15.753274   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:15.755958   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:15.756073   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:45:16.253623   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:16.253648   35235 round_trippers.go:469] Request Headers:
I0621 18:45:16.253660   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:16.253665   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:16.255941   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:16.753611   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:16.753636   35235 round_trippers.go:469] Request Headers:
I0621 18:45:16.753644   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:16.753647   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:16.755948   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:17.253748   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:17.253772   35235 round_trippers.go:469] Request Headers:
I0621 18:45:17.253779   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:17.253782   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:17.256366   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:17.754133   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:17.754157   35235 round_trippers.go:469] Request Headers:
I0621 18:45:17.754164   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:17.754168   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:17.756642   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:17.756751   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:45:18.253314   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:18.253337   35235 round_trippers.go:469] Request Headers:
I0621 18:45:18.253345   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:18.253349   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:18.255719   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:18.753392   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:18.753415   35235 round_trippers.go:469] Request Headers:
I0621 18:45:18.753422   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:18.753426   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:18.755755   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:19.253431   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:19.253454   35235 round_trippers.go:469] Request Headers:
I0621 18:45:19.253462   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:19.253465   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:19.256052   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:19.753815   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:19.753837   35235 round_trippers.go:469] Request Headers:
I0621 18:45:19.753845   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:19.753848   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:19.756221   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:20.254007   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:20.254037   35235 round_trippers.go:469] Request Headers:
I0621 18:45:20.254050   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:20.254058   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:20.256384   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:20.256490   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:45:20.753085   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:20.753105   35235 round_trippers.go:469] Request Headers:
I0621 18:45:20.753113   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:20.753117   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:20.755251   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:21.254043   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:21.254069   35235 round_trippers.go:469] Request Headers:
I0621 18:45:21.254079   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:21.254085   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:21.255768   35235 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I0621 18:45:21.753445   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:21.753468   35235 round_trippers.go:469] Request Headers:
I0621 18:45:21.753476   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:21.753484   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:21.759645   35235 round_trippers.go:574] Response Status: 404 Not Found in 6 milliseconds
I0621 18:45:22.253316   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:22.253343   35235 round_trippers.go:469] Request Headers:
I0621 18:45:22.253352   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:22.253357   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:22.255259   35235 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I0621 18:45:22.754058   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:22.754082   35235 round_trippers.go:469] Request Headers:
I0621 18:45:22.754090   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:22.754093   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:22.756412   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:22.756551   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:45:23.253136   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:23.253160   35235 round_trippers.go:469] Request Headers:
I0621 18:45:23.253168   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:23.253175   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:23.255457   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:23.753140   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:23.753161   35235 round_trippers.go:469] Request Headers:
I0621 18:45:23.753167   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:23.753176   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:23.755402   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:24.253097   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:24.253119   35235 round_trippers.go:469] Request Headers:
I0621 18:45:24.253126   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:24.253130   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:24.256175   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I0621 18:45:24.753993   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:24.754017   35235 round_trippers.go:469] Request Headers:
I0621 18:45:24.754028   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:24.754034   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:24.756375   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:25.254140   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:25.254162   35235 round_trippers.go:469] Request Headers:
I0621 18:45:25.254170   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:25.254175   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:25.256565   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:25.256661   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:45:25.753651   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:25.753684   35235 round_trippers.go:469] Request Headers:
I0621 18:45:25.753696   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:25.753701   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:25.757005   35235 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
I0621 18:45:26.253751   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:26.253776   35235 round_trippers.go:469] Request Headers:
I0621 18:45:26.253784   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:26.253788   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:26.256361   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:26.754109   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:26.754131   35235 round_trippers.go:469] Request Headers:
I0621 18:45:26.754138   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:26.754148   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:26.756397   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:27.254152   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:27.254177   35235 round_trippers.go:469] Request Headers:
I0621 18:45:27.254184   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:27.254188   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:27.256320   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:27.754068   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:27.754091   35235 round_trippers.go:469] Request Headers:
I0621 18:45:27.754097   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:27.754101   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:27.756571   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:27.756693   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:45:28.253240   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:28.253261   35235 round_trippers.go:469] Request Headers:
I0621 18:45:28.253270   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:28.253274   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:28.255463   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:28.753124   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:28.753146   35235 round_trippers.go:469] Request Headers:
I0621 18:45:28.753154   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:28.753157   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:28.755517   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:29.253209   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:29.253230   35235 round_trippers.go:469] Request Headers:
I0621 18:45:29.253240   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:29.253247   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:29.255668   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:29.753349   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:29.753371   35235 round_trippers.go:469] Request Headers:
I0621 18:45:29.753380   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:29.753385   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:29.755660   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:30.253379   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:30.253400   35235 round_trippers.go:469] Request Headers:
I0621 18:45:30.253409   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:30.253415   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:30.256048   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:30.256143   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:45:30.753921   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:30.753943   35235 round_trippers.go:469] Request Headers:
I0621 18:45:30.753965   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:30.753969   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:30.756730   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:31.253201   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:31.253226   35235 round_trippers.go:469] Request Headers:
I0621 18:45:31.253233   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:31.253238   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:31.256153   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:31.754019   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:31.754050   35235 round_trippers.go:469] Request Headers:
I0621 18:45:31.754061   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:31.754067   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:31.756429   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:32.253128   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:32.253153   35235 round_trippers.go:469] Request Headers:
I0621 18:45:32.253164   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:32.253169   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:32.255755   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:32.753493   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:32.753514   35235 round_trippers.go:469] Request Headers:
I0621 18:45:32.753521   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:32.753525   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:32.755977   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:32.756091   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:45:33.253724   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:33.253746   35235 round_trippers.go:469] Request Headers:
I0621 18:45:33.253756   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:33.253760   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:33.256314   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:33.754057   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:33.754082   35235 round_trippers.go:469] Request Headers:
I0621 18:45:33.754092   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:33.754098   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:33.756557   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:34.253231   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:34.253258   35235 round_trippers.go:469] Request Headers:
I0621 18:45:34.253268   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:34.253272   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:34.255728   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:34.753415   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:34.753440   35235 round_trippers.go:469] Request Headers:
I0621 18:45:34.753453   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:34.753461   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:34.755841   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:35.253551   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:35.253582   35235 round_trippers.go:469] Request Headers:
I0621 18:45:35.253593   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:35.253599   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:35.256278   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:35.256387   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:45:35.753300   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:35.753327   35235 round_trippers.go:469] Request Headers:
I0621 18:45:35.753337   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:35.753341   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:35.756209   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:36.253989   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:36.254015   35235 round_trippers.go:469] Request Headers:
I0621 18:45:36.254026   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:36.254034   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:36.256097   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:36.753872   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:36.753901   35235 round_trippers.go:469] Request Headers:
I0621 18:45:36.753912   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:36.753921   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:36.756059   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:37.253848   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:37.253871   35235 round_trippers.go:469] Request Headers:
I0621 18:45:37.253880   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:37.253884   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:37.256493   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:37.256590   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:45:37.753156   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:37.753178   35235 round_trippers.go:469] Request Headers:
I0621 18:45:37.753186   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:37.753192   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:37.755149   35235 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I0621 18:45:38.253771   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:38.253794   35235 round_trippers.go:469] Request Headers:
I0621 18:45:38.253825   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:38.253830   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:38.256160   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:38.753955   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:38.753985   35235 round_trippers.go:469] Request Headers:
I0621 18:45:38.753992   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:38.753997   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:38.756347   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:39.254098   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:39.254122   35235 round_trippers.go:469] Request Headers:
I0621 18:45:39.254129   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:39.254136   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:39.256402   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:39.754126   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:39.754149   35235 round_trippers.go:469] Request Headers:
I0621 18:45:39.754157   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:39.754161   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:39.756436   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:39.756550   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:45:40.253130   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:40.253152   35235 round_trippers.go:469] Request Headers:
I0621 18:45:40.253159   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:40.253163   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:40.255680   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:40.753528   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:40.753555   35235 round_trippers.go:469] Request Headers:
I0621 18:45:40.753565   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:40.753570   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:40.756173   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:41.253963   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:41.253994   35235 round_trippers.go:469] Request Headers:
I0621 18:45:41.254005   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:41.254009   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:41.256275   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:41.754083   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:41.754106   35235 round_trippers.go:469] Request Headers:
I0621 18:45:41.754113   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:41.754117   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:41.756504   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:41.756596   35235 node_ready.go:53] error getting node "ha-406291-m02": nodes "ha-406291-m02" not found
I0621 18:45:42.253204   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:42.253229   35235 round_trippers.go:469] Request Headers:
I0621 18:45:42.253237   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:42.253241   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:42.255314   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:42.753088   35235 round_trippers.go:463] GET https://192.168.39.198:8443/api/v1/nodes/ha-406291-m02
I0621 18:45:42.753119   35235 round_trippers.go:469] Request Headers:
I0621 18:45:42.753134   35235 round_trippers.go:473]     Accept: application/json, */*
I0621 18:45:42.753140   35235 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0621 18:45:42.755605   35235 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0621 18:45:42.755728   35235 node_ready.go:38] duration metric: took 4m0.002771633s for node "ha-406291-m02" to be "Ready" ...
I0621 18:45:42.757939   35235 out.go:177] 
W0621 18:45:42.759451   35235 out.go:239] X Exiting due to GUEST_NODE_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
W0621 18:45:42.759470   35235 out.go:239] * 
W0621 18:45:42.761346   35235 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0621 18:45:42.762830   35235 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-linux-amd64 -p ha-406291 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr: exit status 2 (598.372102ms)

                                                
                                                
-- stdout --
	ha-406291
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-406291-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-406291-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0621 18:45:42.992062   36198 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:45:42.992166   36198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:45:42.992174   36198 out.go:304] Setting ErrFile to fd 2...
	I0621 18:45:42.992178   36198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:45:42.992365   36198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:45:42.992522   36198 out.go:298] Setting JSON to false
	I0621 18:45:42.992542   36198 mustload.go:65] Loading cluster: ha-406291
	I0621 18:45:42.992650   36198 notify.go:220] Checking for updates...
	I0621 18:45:42.993395   36198 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:45:42.993442   36198 status.go:255] checking status of ha-406291 ...
	I0621 18:45:42.994432   36198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:42.994509   36198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:43.011507   36198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42993
	I0621 18:45:43.011954   36198 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:43.012543   36198 main.go:141] libmachine: Using API Version  1
	I0621 18:45:43.012566   36198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:43.012976   36198 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:43.013235   36198 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:45:43.015073   36198 status.go:330] ha-406291 host status = "Running" (err=<nil>)
	I0621 18:45:43.015087   36198 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:45:43.015397   36198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:43.015450   36198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:43.030706   36198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34015
	I0621 18:45:43.031119   36198 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:43.031679   36198 main.go:141] libmachine: Using API Version  1
	I0621 18:45:43.031702   36198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:43.032030   36198 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:43.032223   36198 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:45:43.035431   36198 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:45:43.035850   36198 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:45:43.035893   36198 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:45:43.036024   36198 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:45:43.036422   36198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:43.036458   36198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:43.051230   36198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I0621 18:45:43.051653   36198 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:43.052069   36198 main.go:141] libmachine: Using API Version  1
	I0621 18:45:43.052100   36198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:43.052449   36198 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:43.052659   36198 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:45:43.052886   36198 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:45:43.052926   36198 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:45:43.055870   36198 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:45:43.056293   36198 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:45:43.056328   36198 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:45:43.056463   36198 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:45:43.056636   36198 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:45:43.056777   36198 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:45:43.056995   36198 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:45:43.134981   36198 ssh_runner.go:195] Run: systemctl --version
	I0621 18:45:43.141814   36198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:45:43.157750   36198 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:45:43.157786   36198 api_server.go:166] Checking apiserver status ...
	I0621 18:45:43.157860   36198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0621 18:45:43.173864   36198 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup
	W0621 18:45:43.183086   36198 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:45:43.183152   36198 ssh_runner.go:195] Run: ls
	I0621 18:45:43.188283   36198 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0621 18:45:43.192561   36198 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0621 18:45:43.192595   36198 status.go:422] ha-406291 apiserver status = Running (err=<nil>)
	I0621 18:45:43.192606   36198 status.go:257] ha-406291 status: &{Name:ha-406291 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:45:43.192623   36198 status.go:255] checking status of ha-406291-m02 ...
	I0621 18:45:43.192952   36198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:43.192984   36198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:43.208563   36198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34921
	I0621 18:45:43.209082   36198 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:43.209553   36198 main.go:141] libmachine: Using API Version  1
	I0621 18:45:43.209575   36198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:43.209950   36198 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:43.210171   36198 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:45:43.211849   36198 status.go:330] ha-406291-m02 host status = "Running" (err=<nil>)
	I0621 18:45:43.211865   36198 host.go:66] Checking if "ha-406291-m02" exists ...
	I0621 18:45:43.212149   36198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:43.212195   36198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:43.227024   36198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35617
	I0621 18:45:43.227498   36198 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:43.227950   36198 main.go:141] libmachine: Using API Version  1
	I0621 18:45:43.227974   36198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:43.228316   36198 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:43.228551   36198 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:45:43.231518   36198 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:45:43.231971   36198 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:45:43.232007   36198 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:45:43.232203   36198 host.go:66] Checking if "ha-406291-m02" exists ...
	I0621 18:45:43.232532   36198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:43.232572   36198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:43.247511   36198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36087
	I0621 18:45:43.247976   36198 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:43.248491   36198 main.go:141] libmachine: Using API Version  1
	I0621 18:45:43.248510   36198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:43.248815   36198 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:43.249029   36198 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:45:43.249315   36198 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:45:43.249334   36198 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:45:43.252497   36198 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:45:43.252981   36198 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:45:43.253010   36198 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:45:43.253183   36198 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:45:43.253396   36198 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:45:43.253597   36198 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:45:43.253752   36198 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:45:43.337391   36198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:45:43.353639   36198 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:45:43.353674   36198 api_server.go:166] Checking apiserver status ...
	I0621 18:45:43.353719   36198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0621 18:45:43.368997   36198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:45:43.369022   36198 status.go:422] ha-406291-m02 apiserver status = Stopped (err=<nil>)
	I0621 18:45:43.369031   36198 status.go:257] ha-406291-m02 status: &{Name:ha-406291-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:45:43.369054   36198 status.go:255] checking status of ha-406291-m03 ...
	I0621 18:45:43.369404   36198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:43.369447   36198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:43.385370   36198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44579
	I0621 18:45:43.385812   36198 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:43.386329   36198 main.go:141] libmachine: Using API Version  1
	I0621 18:45:43.386350   36198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:43.386685   36198 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:43.386913   36198 main.go:141] libmachine: (ha-406291-m03) Calling .GetState
	I0621 18:45:43.388562   36198 status.go:330] ha-406291-m03 host status = "Running" (err=<nil>)
	I0621 18:45:43.388582   36198 host.go:66] Checking if "ha-406291-m03" exists ...
	I0621 18:45:43.388859   36198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:43.388895   36198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:43.405153   36198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38759
	I0621 18:45:43.405561   36198 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:43.406226   36198 main.go:141] libmachine: Using API Version  1
	I0621 18:45:43.406246   36198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:43.406610   36198 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:43.406875   36198 main.go:141] libmachine: (ha-406291-m03) Calling .GetIP
	I0621 18:45:43.410304   36198 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:45:43.410883   36198 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:45:43.410904   36198 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:45:43.411071   36198 host.go:66] Checking if "ha-406291-m03" exists ...
	I0621 18:45:43.411383   36198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:43.411430   36198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:43.428496   36198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46655
	I0621 18:45:43.428949   36198 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:43.429429   36198 main.go:141] libmachine: Using API Version  1
	I0621 18:45:43.429462   36198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:43.429809   36198 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:43.430007   36198 main.go:141] libmachine: (ha-406291-m03) Calling .DriverName
	I0621 18:45:43.430220   36198 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:45:43.430240   36198 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHHostname
	I0621 18:45:43.433101   36198 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:45:43.433525   36198 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:45:43.433554   36198 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:45:43.433665   36198 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHPort
	I0621 18:45:43.433851   36198 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHKeyPath
	I0621 18:45:43.433997   36198 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHUsername
	I0621 18:45:43.434231   36198 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m03/id_rsa Username:docker}
	I0621 18:45:43.527167   36198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:45:43.545218   36198 status.go:257] ha-406291-m03 status: &{Name:ha-406291-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr: exit status 2 (566.018075ms)

                                                
                                                
-- stdout --
	ha-406291
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-406291-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-406291-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0621 18:45:44.301640   36265 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:45:44.302170   36265 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:45:44.302188   36265 out.go:304] Setting ErrFile to fd 2...
	I0621 18:45:44.302196   36265 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:45:44.302721   36265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:45:44.303092   36265 out.go:298] Setting JSON to false
	I0621 18:45:44.303116   36265 mustload.go:65] Loading cluster: ha-406291
	I0621 18:45:44.303147   36265 notify.go:220] Checking for updates...
	I0621 18:45:44.303810   36265 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:45:44.303833   36265 status.go:255] checking status of ha-406291 ...
	I0621 18:45:44.304264   36265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:44.304339   36265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:44.323607   36265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42809
	I0621 18:45:44.324054   36265 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:44.324638   36265 main.go:141] libmachine: Using API Version  1
	I0621 18:45:44.324668   36265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:44.324952   36265 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:44.325197   36265 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:45:44.326663   36265 status.go:330] ha-406291 host status = "Running" (err=<nil>)
	I0621 18:45:44.326679   36265 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:45:44.326973   36265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:44.327008   36265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:44.341340   36265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44387
	I0621 18:45:44.341832   36265 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:44.342694   36265 main.go:141] libmachine: Using API Version  1
	I0621 18:45:44.342717   36265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:44.342964   36265 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:44.343147   36265 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:45:44.346320   36265 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:45:44.346798   36265 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:45:44.346831   36265 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:45:44.346895   36265 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:45:44.347274   36265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:44.347318   36265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:44.362184   36265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46131
	I0621 18:45:44.362588   36265 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:44.363041   36265 main.go:141] libmachine: Using API Version  1
	I0621 18:45:44.363060   36265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:44.363356   36265 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:44.363559   36265 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:45:44.363827   36265 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:45:44.363857   36265 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:45:44.366677   36265 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:45:44.367110   36265 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:45:44.367151   36265 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:45:44.367249   36265 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:45:44.367448   36265 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:45:44.367581   36265 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:45:44.367715   36265 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:45:44.444956   36265 ssh_runner.go:195] Run: systemctl --version
	I0621 18:45:44.451867   36265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:45:44.467895   36265 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:45:44.467926   36265 api_server.go:166] Checking apiserver status ...
	I0621 18:45:44.467964   36265 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0621 18:45:44.481380   36265 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup
	W0621 18:45:44.495445   36265 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:45:44.495491   36265 ssh_runner.go:195] Run: ls
	I0621 18:45:44.502457   36265 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0621 18:45:44.506489   36265 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0621 18:45:44.506509   36265 status.go:422] ha-406291 apiserver status = Running (err=<nil>)
	I0621 18:45:44.506518   36265 status.go:257] ha-406291 status: &{Name:ha-406291 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:45:44.506533   36265 status.go:255] checking status of ha-406291-m02 ...
	I0621 18:45:44.506819   36265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:44.506849   36265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:44.523292   36265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40675
	I0621 18:45:44.523708   36265 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:44.524220   36265 main.go:141] libmachine: Using API Version  1
	I0621 18:45:44.524244   36265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:44.524537   36265 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:44.524736   36265 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:45:44.526235   36265 status.go:330] ha-406291-m02 host status = "Running" (err=<nil>)
	I0621 18:45:44.526248   36265 host.go:66] Checking if "ha-406291-m02" exists ...
	I0621 18:45:44.526513   36265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:44.526549   36265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:44.541482   36265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36955
	I0621 18:45:44.541898   36265 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:44.542390   36265 main.go:141] libmachine: Using API Version  1
	I0621 18:45:44.542409   36265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:44.542723   36265 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:44.542897   36265 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:45:44.545429   36265 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:45:44.545757   36265 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:45:44.545779   36265 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:45:44.545937   36265 host.go:66] Checking if "ha-406291-m02" exists ...
	I0621 18:45:44.546277   36265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:44.546311   36265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:44.561032   36265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40143
	I0621 18:45:44.561408   36265 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:44.561872   36265 main.go:141] libmachine: Using API Version  1
	I0621 18:45:44.561897   36265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:44.562191   36265 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:44.562380   36265 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:45:44.562584   36265 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:45:44.562603   36265 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:45:44.565315   36265 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:45:44.565662   36265 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:45:44.565694   36265 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:45:44.565841   36265 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:45:44.566036   36265 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:45:44.566193   36265 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:45:44.566304   36265 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:45:44.644556   36265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:45:44.657741   36265 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:45:44.657777   36265 api_server.go:166] Checking apiserver status ...
	I0621 18:45:44.657850   36265 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0621 18:45:44.669165   36265 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:45:44.669189   36265 status.go:422] ha-406291-m02 apiserver status = Stopped (err=<nil>)
	I0621 18:45:44.669200   36265 status.go:257] ha-406291-m02 status: &{Name:ha-406291-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:45:44.669237   36265 status.go:255] checking status of ha-406291-m03 ...
	I0621 18:45:44.669542   36265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:44.669584   36265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:44.685720   36265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43567
	I0621 18:45:44.686143   36265 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:44.686664   36265 main.go:141] libmachine: Using API Version  1
	I0621 18:45:44.686687   36265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:44.686968   36265 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:44.687160   36265 main.go:141] libmachine: (ha-406291-m03) Calling .GetState
	I0621 18:45:44.688681   36265 status.go:330] ha-406291-m03 host status = "Running" (err=<nil>)
	I0621 18:45:44.688695   36265 host.go:66] Checking if "ha-406291-m03" exists ...
	I0621 18:45:44.688982   36265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:44.689017   36265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:44.703703   36265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0621 18:45:44.704196   36265 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:44.704700   36265 main.go:141] libmachine: Using API Version  1
	I0621 18:45:44.704725   36265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:44.704993   36265 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:44.705143   36265 main.go:141] libmachine: (ha-406291-m03) Calling .GetIP
	I0621 18:45:44.708312   36265 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:45:44.708742   36265 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:45:44.708762   36265 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:45:44.708943   36265 host.go:66] Checking if "ha-406291-m03" exists ...
	I0621 18:45:44.709230   36265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:44.709270   36265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:44.724970   36265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41767
	I0621 18:45:44.725913   36265 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:44.726397   36265 main.go:141] libmachine: Using API Version  1
	I0621 18:45:44.726415   36265 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:44.726675   36265 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:44.726869   36265 main.go:141] libmachine: (ha-406291-m03) Calling .DriverName
	I0621 18:45:44.727060   36265 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:45:44.727079   36265 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHHostname
	I0621 18:45:44.730131   36265 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:45:44.730581   36265 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:45:44.730619   36265 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:45:44.730765   36265 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHPort
	I0621 18:45:44.730935   36265 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHKeyPath
	I0621 18:45:44.731093   36265 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHUsername
	I0621 18:45:44.731189   36265 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m03/id_rsa Username:docker}
	I0621 18:45:44.813078   36265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:45:44.826402   36265 status.go:257] ha-406291-m03 status: &{Name:ha-406291-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr: exit status 2 (552.557731ms)

                                                
                                                
-- stdout --
	ha-406291
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-406291-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-406291-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0621 18:45:45.933361   36346 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:45:45.933632   36346 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:45:45.933643   36346 out.go:304] Setting ErrFile to fd 2...
	I0621 18:45:45.933649   36346 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:45:45.933880   36346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:45:45.934079   36346 out.go:298] Setting JSON to false
	I0621 18:45:45.934107   36346 mustload.go:65] Loading cluster: ha-406291
	I0621 18:45:45.934207   36346 notify.go:220] Checking for updates...
	I0621 18:45:45.935549   36346 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:45:45.935586   36346 status.go:255] checking status of ha-406291 ...
	I0621 18:45:45.936302   36346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:45.936358   36346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:45.952783   36346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46733
	I0621 18:45:45.953237   36346 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:45.953982   36346 main.go:141] libmachine: Using API Version  1
	I0621 18:45:45.954047   36346 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:45.954453   36346 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:45.954656   36346 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:45:45.956335   36346 status.go:330] ha-406291 host status = "Running" (err=<nil>)
	I0621 18:45:45.956349   36346 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:45:45.956681   36346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:45.956731   36346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:45.971740   36346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46293
	I0621 18:45:45.972479   36346 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:45.972899   36346 main.go:141] libmachine: Using API Version  1
	I0621 18:45:45.972920   36346 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:45.973236   36346 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:45.973400   36346 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:45:45.976337   36346 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:45:45.976743   36346 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:45:45.976785   36346 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:45:45.976939   36346 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:45:45.977211   36346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:45.977246   36346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:45.992999   36346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36813
	I0621 18:45:45.993369   36346 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:45.993879   36346 main.go:141] libmachine: Using API Version  1
	I0621 18:45:45.993904   36346 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:45.994210   36346 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:45.994501   36346 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:45:45.994664   36346 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:45:45.994684   36346 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:45:45.997170   36346 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:45:45.997574   36346 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:45:45.997593   36346 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:45:45.997718   36346 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:45:45.997885   36346 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:45:45.998072   36346 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:45:45.998227   36346 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:45:46.073183   36346 ssh_runner.go:195] Run: systemctl --version
	I0621 18:45:46.079079   36346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:45:46.093312   36346 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:45:46.093346   36346 api_server.go:166] Checking apiserver status ...
	I0621 18:45:46.093386   36346 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0621 18:45:46.107084   36346 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup
	W0621 18:45:46.116574   36346 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:45:46.116625   36346 ssh_runner.go:195] Run: ls
	I0621 18:45:46.120978   36346 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0621 18:45:46.125201   36346 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0621 18:45:46.125222   36346 status.go:422] ha-406291 apiserver status = Running (err=<nil>)
	I0621 18:45:46.125231   36346 status.go:257] ha-406291 status: &{Name:ha-406291 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:45:46.125245   36346 status.go:255] checking status of ha-406291-m02 ...
	I0621 18:45:46.125521   36346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:46.125554   36346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:46.140956   36346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36425
	I0621 18:45:46.141432   36346 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:46.141929   36346 main.go:141] libmachine: Using API Version  1
	I0621 18:45:46.141950   36346 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:46.142302   36346 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:46.142493   36346 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:45:46.144014   36346 status.go:330] ha-406291-m02 host status = "Running" (err=<nil>)
	I0621 18:45:46.144028   36346 host.go:66] Checking if "ha-406291-m02" exists ...
	I0621 18:45:46.144353   36346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:46.144396   36346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:46.159593   36346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44977
	I0621 18:45:46.160084   36346 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:46.160548   36346 main.go:141] libmachine: Using API Version  1
	I0621 18:45:46.160572   36346 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:46.160829   36346 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:46.160985   36346 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:45:46.163543   36346 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:45:46.163916   36346 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:45:46.163939   36346 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:45:46.164071   36346 host.go:66] Checking if "ha-406291-m02" exists ...
	I0621 18:45:46.164368   36346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:46.164416   36346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:46.179354   36346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34563
	I0621 18:45:46.179731   36346 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:46.180169   36346 main.go:141] libmachine: Using API Version  1
	I0621 18:45:46.180185   36346 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:46.180431   36346 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:46.180572   36346 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:45:46.180777   36346 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:45:46.180801   36346 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:45:46.183515   36346 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:45:46.183889   36346 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:45:46.183917   36346 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:45:46.184059   36346 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:45:46.184233   36346 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:45:46.184362   36346 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:45:46.184480   36346 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:45:46.260755   36346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:45:46.275050   36346 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:45:46.275074   36346 api_server.go:166] Checking apiserver status ...
	I0621 18:45:46.275101   36346 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0621 18:45:46.286912   36346 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:45:46.286947   36346 status.go:422] ha-406291-m02 apiserver status = Stopped (err=<nil>)
	I0621 18:45:46.286956   36346 status.go:257] ha-406291-m02 status: &{Name:ha-406291-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:45:46.286979   36346 status.go:255] checking status of ha-406291-m03 ...
	I0621 18:45:46.287374   36346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:46.287410   36346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:46.303000   36346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41353
	I0621 18:45:46.303401   36346 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:46.303866   36346 main.go:141] libmachine: Using API Version  1
	I0621 18:45:46.303887   36346 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:46.304215   36346 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:46.304370   36346 main.go:141] libmachine: (ha-406291-m03) Calling .GetState
	I0621 18:45:46.306143   36346 status.go:330] ha-406291-m03 host status = "Running" (err=<nil>)
	I0621 18:45:46.306161   36346 host.go:66] Checking if "ha-406291-m03" exists ...
	I0621 18:45:46.306445   36346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:46.306494   36346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:46.321934   36346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35893
	I0621 18:45:46.322329   36346 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:46.322781   36346 main.go:141] libmachine: Using API Version  1
	I0621 18:45:46.322804   36346 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:46.323205   36346 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:46.323423   36346 main.go:141] libmachine: (ha-406291-m03) Calling .GetIP
	I0621 18:45:46.326235   36346 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:45:46.326681   36346 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:45:46.326711   36346 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:45:46.326809   36346 host.go:66] Checking if "ha-406291-m03" exists ...
	I0621 18:45:46.327175   36346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:46.327219   36346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:46.344257   36346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35591
	I0621 18:45:46.344704   36346 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:46.345165   36346 main.go:141] libmachine: Using API Version  1
	I0621 18:45:46.345187   36346 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:46.345503   36346 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:46.345654   36346 main.go:141] libmachine: (ha-406291-m03) Calling .DriverName
	I0621 18:45:46.345855   36346 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:45:46.345878   36346 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHHostname
	I0621 18:45:46.348294   36346 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:45:46.348729   36346 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:45:46.348751   36346 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:45:46.348884   36346 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHPort
	I0621 18:45:46.349076   36346 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHKeyPath
	I0621 18:45:46.349231   36346 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHUsername
	I0621 18:45:46.349359   36346 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m03/id_rsa Username:docker}
	I0621 18:45:46.432625   36346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:45:46.445771   36346 status.go:257] ha-406291-m03 status: &{Name:ha-406291-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr: exit status 2 (564.457845ms)

                                                
                                                
-- stdout --
	ha-406291
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-406291-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-406291-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0621 18:45:48.568350   36412 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:45:48.568629   36412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:45:48.568641   36412 out.go:304] Setting ErrFile to fd 2...
	I0621 18:45:48.568645   36412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:45:48.568811   36412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:45:48.569001   36412 out.go:298] Setting JSON to false
	I0621 18:45:48.569032   36412 mustload.go:65] Loading cluster: ha-406291
	I0621 18:45:48.569068   36412 notify.go:220] Checking for updates...
	I0621 18:45:48.569542   36412 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:45:48.569563   36412 status.go:255] checking status of ha-406291 ...
	I0621 18:45:48.570126   36412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:48.570163   36412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:48.588791   36412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38101
	I0621 18:45:48.589292   36412 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:48.589865   36412 main.go:141] libmachine: Using API Version  1
	I0621 18:45:48.589882   36412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:48.590219   36412 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:48.590953   36412 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:45:48.592701   36412 status.go:330] ha-406291 host status = "Running" (err=<nil>)
	I0621 18:45:48.592715   36412 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:45:48.592979   36412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:48.593011   36412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:48.608186   36412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34123
	I0621 18:45:48.608622   36412 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:48.609145   36412 main.go:141] libmachine: Using API Version  1
	I0621 18:45:48.609181   36412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:48.609516   36412 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:48.609712   36412 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:45:48.612491   36412 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:45:48.612946   36412 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:45:48.612966   36412 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:45:48.613135   36412 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:45:48.613489   36412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:48.613531   36412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:48.628901   36412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39905
	I0621 18:45:48.629344   36412 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:48.629754   36412 main.go:141] libmachine: Using API Version  1
	I0621 18:45:48.629778   36412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:48.630094   36412 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:48.630282   36412 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:45:48.630501   36412 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:45:48.630522   36412 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:45:48.633515   36412 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:45:48.633894   36412 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:45:48.633921   36412 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:45:48.634109   36412 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:45:48.634308   36412 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:45:48.634478   36412 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:45:48.634671   36412 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:45:48.713068   36412 ssh_runner.go:195] Run: systemctl --version
	I0621 18:45:48.718786   36412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:45:48.734268   36412 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:45:48.734313   36412 api_server.go:166] Checking apiserver status ...
	I0621 18:45:48.734369   36412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0621 18:45:48.748683   36412 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup
	W0621 18:45:48.761843   36412 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:45:48.761918   36412 ssh_runner.go:195] Run: ls
	I0621 18:45:48.767628   36412 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0621 18:45:48.772117   36412 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0621 18:45:48.772145   36412 status.go:422] ha-406291 apiserver status = Running (err=<nil>)
	I0621 18:45:48.772158   36412 status.go:257] ha-406291 status: &{Name:ha-406291 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:45:48.772179   36412 status.go:255] checking status of ha-406291-m02 ...
	I0621 18:45:48.772592   36412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:48.772640   36412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:48.787806   36412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46565
	I0621 18:45:48.788247   36412 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:48.788782   36412 main.go:141] libmachine: Using API Version  1
	I0621 18:45:48.788807   36412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:48.789125   36412 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:48.789329   36412 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:45:48.790928   36412 status.go:330] ha-406291-m02 host status = "Running" (err=<nil>)
	I0621 18:45:48.790944   36412 host.go:66] Checking if "ha-406291-m02" exists ...
	I0621 18:45:48.791237   36412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:48.791270   36412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:48.806964   36412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40253
	I0621 18:45:48.807382   36412 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:48.807904   36412 main.go:141] libmachine: Using API Version  1
	I0621 18:45:48.807931   36412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:48.808232   36412 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:48.808426   36412 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:45:48.810905   36412 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:45:48.811373   36412 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:45:48.811401   36412 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:45:48.811570   36412 host.go:66] Checking if "ha-406291-m02" exists ...
	I0621 18:45:48.811905   36412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:48.811940   36412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:48.826813   36412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34385
	I0621 18:45:48.827237   36412 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:48.827677   36412 main.go:141] libmachine: Using API Version  1
	I0621 18:45:48.827696   36412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:48.827995   36412 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:48.828195   36412 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:45:48.828379   36412 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:45:48.828397   36412 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:45:48.831518   36412 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:45:48.831867   36412 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:45:48.831903   36412 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:45:48.832038   36412 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:45:48.832185   36412 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:45:48.832331   36412 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:45:48.832481   36412 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:45:48.908790   36412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:45:48.922240   36412 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:45:48.922265   36412 api_server.go:166] Checking apiserver status ...
	I0621 18:45:48.922298   36412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0621 18:45:48.933888   36412 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:45:48.933917   36412 status.go:422] ha-406291-m02 apiserver status = Stopped (err=<nil>)
	I0621 18:45:48.933928   36412 status.go:257] ha-406291-m02 status: &{Name:ha-406291-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:45:48.933949   36412 status.go:255] checking status of ha-406291-m03 ...
	I0621 18:45:48.934297   36412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:48.934335   36412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:48.949444   36412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39787
	I0621 18:45:48.949857   36412 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:48.950311   36412 main.go:141] libmachine: Using API Version  1
	I0621 18:45:48.950331   36412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:48.950634   36412 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:48.950797   36412 main.go:141] libmachine: (ha-406291-m03) Calling .GetState
	I0621 18:45:48.952314   36412 status.go:330] ha-406291-m03 host status = "Running" (err=<nil>)
	I0621 18:45:48.952329   36412 host.go:66] Checking if "ha-406291-m03" exists ...
	I0621 18:45:48.952609   36412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:48.952640   36412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:48.967689   36412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38685
	I0621 18:45:48.968095   36412 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:48.968494   36412 main.go:141] libmachine: Using API Version  1
	I0621 18:45:48.968511   36412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:48.968780   36412 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:48.968927   36412 main.go:141] libmachine: (ha-406291-m03) Calling .GetIP
	I0621 18:45:48.971686   36412 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:45:48.972103   36412 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:45:48.972128   36412 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:45:48.972306   36412 host.go:66] Checking if "ha-406291-m03" exists ...
	I0621 18:45:48.972577   36412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:48.972619   36412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:48.987338   36412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43261
	I0621 18:45:48.987697   36412 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:48.988132   36412 main.go:141] libmachine: Using API Version  1
	I0621 18:45:48.988159   36412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:48.988528   36412 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:48.988698   36412 main.go:141] libmachine: (ha-406291-m03) Calling .DriverName
	I0621 18:45:48.988876   36412 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:45:48.988896   36412 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHHostname
	I0621 18:45:48.991556   36412 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:45:48.991930   36412 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:45:48.991956   36412 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:45:48.992086   36412 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHPort
	I0621 18:45:48.992270   36412 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHKeyPath
	I0621 18:45:48.992394   36412 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHUsername
	I0621 18:45:48.992501   36412 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m03/id_rsa Username:docker}
	I0621 18:45:49.077040   36412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:45:49.091690   36412 status.go:257] ha-406291-m03 status: &{Name:ha-406291-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
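For readers following the probes in the runs above: each per-node check boils down to three SSH commands that appear verbatim in the log (the `df -h /var | awk ...` disk check, `sudo systemctl is-active --quiet service kubelet`, and `sudo pgrep -xnf kube-apiserver.*minikube.*`). The following is a minimal Go sketch of those probes, not minikube's status.go; the host, key path, and remote commands are taken from this run's log, while the plain `ssh` invocation and its `StrictHostKeyChecking=no` option are assumptions made to keep the sketch self-contained.

// nodeprobe.go - a minimal sketch (not minikube's implementation) of the
// per-node probes visible in the status runs above.
package main

import (
	"fmt"
	"os/exec"
)

// sshRun executes a command on the node, roughly as the log's ssh_runner does,
// using the per-machine private key minikube generated for the VM.
// Assumption: host key checking is disabled purely for brevity.
func sshRun(host, key, cmd string) (string, error) {
	out, err := exec.Command("ssh",
		"-i", key,
		"-o", "StrictHostKeyChecking=no",
		"docker@"+host,
		cmd,
	).CombinedOutput()
	return string(out), err
}

func main() {
	// Values from this run's log (ha-406291-m02).
	host := "192.168.39.89"
	key := "/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa"

	// Disk pressure check: same df/awk pipeline as ssh_runner.go:195 above
	// (the sh -c wrapper is dropped here to keep the quoting simple).
	if out, err := sshRun(host, key, `df -h /var | awk 'NR==2{print $5}'`); err == nil {
		fmt.Printf("/var usage: %s", out)
	}

	// Kubelet check: exit status 0 means the unit is active ("kubelet: Running").
	if _, err := sshRun(host, key, "sudo systemctl is-active --quiet service kubelet"); err != nil {
		fmt.Println("kubelet: Stopped")
	} else {
		fmt.Println("kubelet: Running")
	}

	// Apiserver check: pgrep exits non-zero when no matching process exists,
	// which the status output above reports as "apiserver: Stopped".
	if _, err := sshRun(host, key, "sudo pgrep -xnf kube-apiserver.*minikube.*"); err != nil {
		fmt.Println("apiserver: Stopped")
	} else {
		fmt.Println("apiserver: Running")
	}
}

The exit-code interpretation is what produces the "kubelet: Stopped" / "apiserver: Stopped" lines for ha-406291-m02 in the stdout blocks above and below.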
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr: exit status 2 (576.527444ms)

                                                
                                                
-- stdout --
	ha-406291
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-406291-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-406291-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0621 18:45:51.289442   36494 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:45:51.289562   36494 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:45:51.289569   36494 out.go:304] Setting ErrFile to fd 2...
	I0621 18:45:51.289573   36494 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:45:51.289722   36494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:45:51.289899   36494 out.go:298] Setting JSON to false
	I0621 18:45:51.289928   36494 mustload.go:65] Loading cluster: ha-406291
	I0621 18:45:51.289961   36494 notify.go:220] Checking for updates...
	I0621 18:45:51.290316   36494 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:45:51.290329   36494 status.go:255] checking status of ha-406291 ...
	I0621 18:45:51.290670   36494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:51.290730   36494 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:51.310870   36494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35365
	I0621 18:45:51.311269   36494 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:51.311778   36494 main.go:141] libmachine: Using API Version  1
	I0621 18:45:51.311802   36494 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:51.312190   36494 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:51.312365   36494 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:45:51.314230   36494 status.go:330] ha-406291 host status = "Running" (err=<nil>)
	I0621 18:45:51.314247   36494 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:45:51.314580   36494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:51.314627   36494 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:51.329713   36494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34149
	I0621 18:45:51.330124   36494 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:51.330540   36494 main.go:141] libmachine: Using API Version  1
	I0621 18:45:51.330566   36494 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:51.330851   36494 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:51.331048   36494 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:45:51.333604   36494 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:45:51.334027   36494 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:45:51.334062   36494 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:45:51.334168   36494 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:45:51.334453   36494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:51.334484   36494 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:51.349642   36494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39913
	I0621 18:45:51.350033   36494 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:51.350575   36494 main.go:141] libmachine: Using API Version  1
	I0621 18:45:51.350599   36494 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:51.350932   36494 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:51.351127   36494 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:45:51.351306   36494 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:45:51.351329   36494 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:45:51.354154   36494 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:45:51.354705   36494 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:45:51.354730   36494 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:45:51.354853   36494 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:45:51.355038   36494 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:45:51.355169   36494 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:45:51.355319   36494 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:45:51.428878   36494 ssh_runner.go:195] Run: systemctl --version
	I0621 18:45:51.435753   36494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:45:51.451810   36494 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:45:51.451842   36494 api_server.go:166] Checking apiserver status ...
	I0621 18:45:51.451873   36494 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0621 18:45:51.466668   36494 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup
	W0621 18:45:51.475332   36494 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:45:51.475388   36494 ssh_runner.go:195] Run: ls
	I0621 18:45:51.479607   36494 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0621 18:45:51.483706   36494 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0621 18:45:51.483728   36494 status.go:422] ha-406291 apiserver status = Running (err=<nil>)
	I0621 18:45:51.483736   36494 status.go:257] ha-406291 status: &{Name:ha-406291 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:45:51.483751   36494 status.go:255] checking status of ha-406291-m02 ...
	I0621 18:45:51.484147   36494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:51.484181   36494 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:51.499705   36494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33515
	I0621 18:45:51.500144   36494 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:51.500635   36494 main.go:141] libmachine: Using API Version  1
	I0621 18:45:51.500654   36494 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:51.500918   36494 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:51.501095   36494 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:45:51.502735   36494 status.go:330] ha-406291-m02 host status = "Running" (err=<nil>)
	I0621 18:45:51.502751   36494 host.go:66] Checking if "ha-406291-m02" exists ...
	I0621 18:45:51.503092   36494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:51.503129   36494 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:51.520103   36494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39513
	I0621 18:45:51.520520   36494 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:51.520976   36494 main.go:141] libmachine: Using API Version  1
	I0621 18:45:51.521006   36494 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:51.521313   36494 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:51.521519   36494 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:45:51.524133   36494 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:45:51.524544   36494 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:45:51.524583   36494 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:45:51.524713   36494 host.go:66] Checking if "ha-406291-m02" exists ...
	I0621 18:45:51.525045   36494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:51.525086   36494 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:51.540007   36494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34729
	I0621 18:45:51.540472   36494 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:51.540904   36494 main.go:141] libmachine: Using API Version  1
	I0621 18:45:51.540922   36494 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:51.541237   36494 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:51.541427   36494 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:45:51.541622   36494 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:45:51.541642   36494 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:45:51.544486   36494 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:45:51.544863   36494 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:45:51.544890   36494 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:45:51.545077   36494 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:45:51.545274   36494 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:45:51.545476   36494 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:45:51.545622   36494 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:45:51.624478   36494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:45:51.640680   36494 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:45:51.640724   36494 api_server.go:166] Checking apiserver status ...
	I0621 18:45:51.640766   36494 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0621 18:45:51.659642   36494 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:45:51.659667   36494 status.go:422] ha-406291-m02 apiserver status = Running (err=<nil>)
	I0621 18:45:51.659677   36494 status.go:257] ha-406291-m02 status: &{Name:ha-406291-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:45:51.659706   36494 status.go:255] checking status of ha-406291-m03 ...
	I0621 18:45:51.660162   36494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:51.660221   36494 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:51.676733   36494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35359
	I0621 18:45:51.677667   36494 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:51.678219   36494 main.go:141] libmachine: Using API Version  1
	I0621 18:45:51.678247   36494 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:51.678646   36494 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:51.678893   36494 main.go:141] libmachine: (ha-406291-m03) Calling .GetState
	I0621 18:45:51.680824   36494 status.go:330] ha-406291-m03 host status = "Running" (err=<nil>)
	I0621 18:45:51.680842   36494 host.go:66] Checking if "ha-406291-m03" exists ...
	I0621 18:45:51.681285   36494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:51.681343   36494 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:51.697448   36494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33049
	I0621 18:45:51.697948   36494 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:51.698514   36494 main.go:141] libmachine: Using API Version  1
	I0621 18:45:51.698538   36494 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:51.698864   36494 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:51.699117   36494 main.go:141] libmachine: (ha-406291-m03) Calling .GetIP
	I0621 18:45:51.702113   36494 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:45:51.702519   36494 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:45:51.702550   36494 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:45:51.702695   36494 host.go:66] Checking if "ha-406291-m03" exists ...
	I0621 18:45:51.703038   36494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:51.703087   36494 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:51.719857   36494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39317
	I0621 18:45:51.720464   36494 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:51.720899   36494 main.go:141] libmachine: Using API Version  1
	I0621 18:45:51.720945   36494 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:51.721253   36494 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:51.721460   36494 main.go:141] libmachine: (ha-406291-m03) Calling .DriverName
	I0621 18:45:51.721700   36494 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:45:51.721722   36494 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHHostname
	I0621 18:45:51.724766   36494 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:45:51.725199   36494 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:45:51.725231   36494 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:45:51.725414   36494 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHPort
	I0621 18:45:51.725579   36494 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHKeyPath
	I0621 18:45:51.725745   36494 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHUsername
	I0621 18:45:51.725914   36494 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m03/id_rsa Username:docker}
	I0621 18:45:51.809773   36494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:45:51.824140   36494 status.go:257] ha-406291-m03 status: &{Name:ha-406291-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
E0621 18:45:54.862044   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr: exit status 2 (575.248768ms)

                                                
                                                
-- stdout --
	ha-406291
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-406291-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-406291-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0621 18:45:56.785867   36575 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:45:56.785969   36575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:45:56.785976   36575 out.go:304] Setting ErrFile to fd 2...
	I0621 18:45:56.785979   36575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:45:56.786166   36575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:45:56.786329   36575 out.go:298] Setting JSON to false
	I0621 18:45:56.786351   36575 mustload.go:65] Loading cluster: ha-406291
	I0621 18:45:56.786393   36575 notify.go:220] Checking for updates...
	I0621 18:45:56.786691   36575 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:45:56.786708   36575 status.go:255] checking status of ha-406291 ...
	I0621 18:45:56.787154   36575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:56.787231   36575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:56.807093   36575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38573
	I0621 18:45:56.807526   36575 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:56.808020   36575 main.go:141] libmachine: Using API Version  1
	I0621 18:45:56.808045   36575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:56.808457   36575 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:56.808673   36575 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:45:56.810261   36575 status.go:330] ha-406291 host status = "Running" (err=<nil>)
	I0621 18:45:56.810289   36575 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:45:56.810615   36575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:56.810643   36575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:56.826346   36575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40133
	I0621 18:45:56.826729   36575 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:56.827274   36575 main.go:141] libmachine: Using API Version  1
	I0621 18:45:56.827297   36575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:56.827596   36575 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:56.827784   36575 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:45:56.830206   36575 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:45:56.830587   36575 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:45:56.830613   36575 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:45:56.830765   36575 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:45:56.831095   36575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:56.831131   36575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:56.845531   36575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35291
	I0621 18:45:56.846005   36575 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:56.846486   36575 main.go:141] libmachine: Using API Version  1
	I0621 18:45:56.846506   36575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:56.846850   36575 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:56.847044   36575 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:45:56.847274   36575 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:45:56.847307   36575 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:45:56.850206   36575 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:45:56.850621   36575 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:45:56.850656   36575 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:45:56.850805   36575 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:45:56.850995   36575 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:45:56.851124   36575 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:45:56.851349   36575 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:45:56.929488   36575 ssh_runner.go:195] Run: systemctl --version
	I0621 18:45:56.935569   36575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:45:56.954210   36575 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:45:56.954240   36575 api_server.go:166] Checking apiserver status ...
	I0621 18:45:56.954276   36575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0621 18:45:56.971502   36575 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup
	W0621 18:45:56.983212   36575 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:45:56.983306   36575 ssh_runner.go:195] Run: ls
	I0621 18:45:56.987544   36575 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0621 18:45:56.991697   36575 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0621 18:45:56.991718   36575 status.go:422] ha-406291 apiserver status = Running (err=<nil>)
	I0621 18:45:56.991727   36575 status.go:257] ha-406291 status: &{Name:ha-406291 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:45:56.991752   36575 status.go:255] checking status of ha-406291-m02 ...
	I0621 18:45:56.992047   36575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:56.992069   36575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:57.007507   36575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43531
	I0621 18:45:57.007979   36575 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:57.008509   36575 main.go:141] libmachine: Using API Version  1
	I0621 18:45:57.008531   36575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:57.008860   36575 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:57.009096   36575 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:45:57.010773   36575 status.go:330] ha-406291-m02 host status = "Running" (err=<nil>)
	I0621 18:45:57.010792   36575 host.go:66] Checking if "ha-406291-m02" exists ...
	I0621 18:45:57.011178   36575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:57.011217   36575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:57.026334   36575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42405
	I0621 18:45:57.026882   36575 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:57.027394   36575 main.go:141] libmachine: Using API Version  1
	I0621 18:45:57.027413   36575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:57.027684   36575 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:57.027852   36575 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:45:57.031570   36575 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:45:57.032179   36575 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:45:57.032199   36575 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:45:57.032375   36575 host.go:66] Checking if "ha-406291-m02" exists ...
	I0621 18:45:57.032713   36575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:57.032755   36575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:57.047855   36575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42165
	I0621 18:45:57.048326   36575 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:57.048825   36575 main.go:141] libmachine: Using API Version  1
	I0621 18:45:57.048844   36575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:57.049145   36575 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:57.049345   36575 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:45:57.049568   36575 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:45:57.049585   36575 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:45:57.052561   36575 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:45:57.053013   36575 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:45:57.053045   36575 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:45:57.053141   36575 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:45:57.053310   36575 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:45:57.053451   36575 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:45:57.053598   36575 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:45:57.132392   36575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:45:57.147914   36575 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:45:57.147946   36575 api_server.go:166] Checking apiserver status ...
	I0621 18:45:57.148007   36575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0621 18:45:57.160928   36575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:45:57.160951   36575 status.go:422] ha-406291-m02 apiserver status = Stopped (err=<nil>)
	I0621 18:45:57.160960   36575 status.go:257] ha-406291-m02 status: &{Name:ha-406291-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:45:57.160992   36575 status.go:255] checking status of ha-406291-m03 ...
	I0621 18:45:57.161276   36575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:57.161299   36575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:57.176259   36575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35655
	I0621 18:45:57.176659   36575 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:57.177070   36575 main.go:141] libmachine: Using API Version  1
	I0621 18:45:57.177093   36575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:57.177424   36575 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:57.177605   36575 main.go:141] libmachine: (ha-406291-m03) Calling .GetState
	I0621 18:45:57.179045   36575 status.go:330] ha-406291-m03 host status = "Running" (err=<nil>)
	I0621 18:45:57.179062   36575 host.go:66] Checking if "ha-406291-m03" exists ...
	I0621 18:45:57.179425   36575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:57.179459   36575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:57.194005   36575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34615
	I0621 18:45:57.194510   36575 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:57.194995   36575 main.go:141] libmachine: Using API Version  1
	I0621 18:45:57.195020   36575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:57.195341   36575 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:57.195514   36575 main.go:141] libmachine: (ha-406291-m03) Calling .GetIP
	I0621 18:45:57.198188   36575 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:45:57.198554   36575 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:45:57.198574   36575 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:45:57.198726   36575 host.go:66] Checking if "ha-406291-m03" exists ...
	I0621 18:45:57.199096   36575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:45:57.199130   36575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:45:57.214403   36575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35631
	I0621 18:45:57.214778   36575 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:45:57.215206   36575 main.go:141] libmachine: Using API Version  1
	I0621 18:45:57.215231   36575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:45:57.215521   36575 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:45:57.215712   36575 main.go:141] libmachine: (ha-406291-m03) Calling .DriverName
	I0621 18:45:57.215942   36575 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:45:57.215962   36575 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHHostname
	I0621 18:45:57.218729   36575 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:45:57.219108   36575 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:45:57.219127   36575 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:45:57.219340   36575 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHPort
	I0621 18:45:57.219516   36575 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHKeyPath
	I0621 18:45:57.219661   36575 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHUsername
	I0621 18:45:57.219782   36575 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m03/id_rsa Username:docker}
	I0621 18:45:57.305558   36575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:45:57.319784   36575 status.go:257] ha-406291-m03 status: &{Name:ha-406291-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
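The primary control plane in each run above is additionally probed over HTTPS: the kubeconfig's server "https://192.168.39.254:8443" is read and its /healthz endpoint must return 200 with body "ok" for the node to report "apiserver: Running". Below is a minimal Go sketch of that health probe, not minikube's code; the endpoint URL comes from the log (kubeconfig.go:125 / api_server.go:253 above), while skipping TLS verification is an assumption made so the sketch runs standalone (the real client authenticates with the cluster CA and client certificate from the kubeconfig).

// healthzprobe.go - a minimal sketch of the apiserver /healthz probe seen above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	// VIP and port from this run's kubeconfig, as logged above.
	url := "https://192.168.39.254:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: CA verification is skipped here for brevity only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	resp, err := client.Get(url)
	if err != nil {
		fmt.Fprintf(os.Stderr, "apiserver status = Stopped: %v\n", err)
		os.Exit(2)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode == http.StatusOK {
		// Matches the "returned 200: ok" lines in the stderr blocks above.
		fmt.Printf("%s returned %d: %s -> apiserver status = Running\n", url, resp.StatusCode, body)
		return
	}
	fmt.Printf("%s returned %d -> apiserver status = Stopped\n", url, resp.StatusCode)
	os.Exit(2)
}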
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr: exit status 2 (558.50044ms)

                                                
                                                
-- stdout --
	ha-406291
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-406291-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-406291-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0621 18:46:03.192903   36657 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:46:03.193007   36657 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:46:03.193015   36657 out.go:304] Setting ErrFile to fd 2...
	I0621 18:46:03.193019   36657 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:46:03.193186   36657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:46:03.193341   36657 out.go:298] Setting JSON to false
	I0621 18:46:03.193362   36657 mustload.go:65] Loading cluster: ha-406291
	I0621 18:46:03.193419   36657 notify.go:220] Checking for updates...
	I0621 18:46:03.193745   36657 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:46:03.193760   36657 status.go:255] checking status of ha-406291 ...
	I0621 18:46:03.194309   36657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:03.194365   36657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:03.209880   36657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39819
	I0621 18:46:03.210318   36657 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:03.210822   36657 main.go:141] libmachine: Using API Version  1
	I0621 18:46:03.210837   36657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:03.211175   36657 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:03.211402   36657 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:46:03.212927   36657 status.go:330] ha-406291 host status = "Running" (err=<nil>)
	I0621 18:46:03.212944   36657 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:46:03.213255   36657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:03.213278   36657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:03.229412   36657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39429
	I0621 18:46:03.229867   36657 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:03.230347   36657 main.go:141] libmachine: Using API Version  1
	I0621 18:46:03.230365   36657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:03.230659   36657 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:03.230827   36657 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:46:03.233507   36657 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:46:03.233926   36657 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:46:03.233982   36657 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:46:03.234093   36657 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:46:03.234457   36657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:03.234495   36657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:03.249353   36657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33337
	I0621 18:46:03.249778   36657 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:03.250253   36657 main.go:141] libmachine: Using API Version  1
	I0621 18:46:03.250295   36657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:03.250630   36657 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:03.251006   36657 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:46:03.251235   36657 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:46:03.251262   36657 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:46:03.254461   36657 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:46:03.254943   36657 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:46:03.254964   36657 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:46:03.255127   36657 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:46:03.255327   36657 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:46:03.255475   36657 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:46:03.255603   36657 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:46:03.333042   36657 ssh_runner.go:195] Run: systemctl --version
	I0621 18:46:03.338806   36657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:46:03.353274   36657 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:46:03.353304   36657 api_server.go:166] Checking apiserver status ...
	I0621 18:46:03.353335   36657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0621 18:46:03.368107   36657 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup
	W0621 18:46:03.377969   36657 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:46:03.378044   36657 ssh_runner.go:195] Run: ls
	I0621 18:46:03.382260   36657 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0621 18:46:03.386403   36657 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0621 18:46:03.386426   36657 status.go:422] ha-406291 apiserver status = Running (err=<nil>)
	I0621 18:46:03.386435   36657 status.go:257] ha-406291 status: &{Name:ha-406291 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:46:03.386454   36657 status.go:255] checking status of ha-406291-m02 ...
	I0621 18:46:03.386747   36657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:03.386784   36657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:03.401825   36657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44887
	I0621 18:46:03.402343   36657 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:03.402828   36657 main.go:141] libmachine: Using API Version  1
	I0621 18:46:03.402859   36657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:03.403249   36657 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:03.403493   36657 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:46:03.405257   36657 status.go:330] ha-406291-m02 host status = "Running" (err=<nil>)
	I0621 18:46:03.405274   36657 host.go:66] Checking if "ha-406291-m02" exists ...
	I0621 18:46:03.405556   36657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:03.405579   36657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:03.420822   36657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41323
	I0621 18:46:03.421364   36657 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:03.421946   36657 main.go:141] libmachine: Using API Version  1
	I0621 18:46:03.421970   36657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:03.422318   36657 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:03.422558   36657 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:46:03.425183   36657 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:46:03.425571   36657 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:46:03.425592   36657 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:46:03.425759   36657 host.go:66] Checking if "ha-406291-m02" exists ...
	I0621 18:46:03.426094   36657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:03.426133   36657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:03.440655   36657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41705
	I0621 18:46:03.441074   36657 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:03.441497   36657 main.go:141] libmachine: Using API Version  1
	I0621 18:46:03.441522   36657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:03.441850   36657 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:03.442100   36657 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:46:03.442303   36657 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:46:03.442328   36657 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:46:03.444673   36657 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:46:03.445039   36657 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:46:03.445068   36657 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:46:03.445163   36657 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:46:03.445357   36657 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:46:03.445492   36657 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:46:03.445639   36657 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:46:03.524965   36657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:46:03.539695   36657 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:46:03.539724   36657 api_server.go:166] Checking apiserver status ...
	I0621 18:46:03.539767   36657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0621 18:46:03.552007   36657 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:46:03.552037   36657 status.go:422] ha-406291-m02 apiserver status = Stopped (err=<nil>)
	I0621 18:46:03.552048   36657 status.go:257] ha-406291-m02 status: &{Name:ha-406291-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:46:03.552069   36657 status.go:255] checking status of ha-406291-m03 ...
	I0621 18:46:03.552384   36657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:03.552421   36657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:03.567498   36657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38417
	I0621 18:46:03.567911   36657 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:03.568417   36657 main.go:141] libmachine: Using API Version  1
	I0621 18:46:03.568442   36657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:03.568761   36657 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:03.568986   36657 main.go:141] libmachine: (ha-406291-m03) Calling .GetState
	I0621 18:46:03.570628   36657 status.go:330] ha-406291-m03 host status = "Running" (err=<nil>)
	I0621 18:46:03.570647   36657 host.go:66] Checking if "ha-406291-m03" exists ...
	I0621 18:46:03.571039   36657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:03.571070   36657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:03.586796   36657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33101
	I0621 18:46:03.587233   36657 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:03.587698   36657 main.go:141] libmachine: Using API Version  1
	I0621 18:46:03.587718   36657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:03.588076   36657 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:03.588321   36657 main.go:141] libmachine: (ha-406291-m03) Calling .GetIP
	I0621 18:46:03.591360   36657 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:46:03.591842   36657 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:46:03.591870   36657 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:46:03.592039   36657 host.go:66] Checking if "ha-406291-m03" exists ...
	I0621 18:46:03.592461   36657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:03.592510   36657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:03.607701   36657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43385
	I0621 18:46:03.608190   36657 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:03.608645   36657 main.go:141] libmachine: Using API Version  1
	I0621 18:46:03.608666   36657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:03.609021   36657 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:03.609217   36657 main.go:141] libmachine: (ha-406291-m03) Calling .DriverName
	I0621 18:46:03.609413   36657 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:46:03.609433   36657 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHHostname
	I0621 18:46:03.612186   36657 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:46:03.612610   36657 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:46:03.612645   36657 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:46:03.612804   36657 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHPort
	I0621 18:46:03.612965   36657 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHKeyPath
	I0621 18:46:03.613094   36657 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHUsername
	I0621 18:46:03.613235   36657 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m03/id_rsa Username:docker}
	I0621 18:46:03.697360   36657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:46:03.711087   36657 status.go:257] ha-406291-m03 status: &{Name:ha-406291-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr: exit status 2 (569.15145ms)

                                                
                                                
-- stdout --
	ha-406291
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-406291-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-406291-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0621 18:46:12.332273   36755 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:46:12.332535   36755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:46:12.332545   36755 out.go:304] Setting ErrFile to fd 2...
	I0621 18:46:12.332550   36755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:46:12.332766   36755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:46:12.332967   36755 out.go:298] Setting JSON to false
	I0621 18:46:12.332993   36755 mustload.go:65] Loading cluster: ha-406291
	I0621 18:46:12.333101   36755 notify.go:220] Checking for updates...
	I0621 18:46:12.333428   36755 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:46:12.333448   36755 status.go:255] checking status of ha-406291 ...
	I0621 18:46:12.333922   36755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:12.333978   36755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:12.353851   36755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40161
	I0621 18:46:12.354358   36755 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:12.354937   36755 main.go:141] libmachine: Using API Version  1
	I0621 18:46:12.354963   36755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:12.355383   36755 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:12.355586   36755 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:46:12.357438   36755 status.go:330] ha-406291 host status = "Running" (err=<nil>)
	I0621 18:46:12.357452   36755 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:46:12.357821   36755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:12.357887   36755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:12.372610   36755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38933
	I0621 18:46:12.373025   36755 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:12.373528   36755 main.go:141] libmachine: Using API Version  1
	I0621 18:46:12.373551   36755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:12.373852   36755 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:12.374027   36755 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:46:12.376640   36755 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:46:12.377257   36755 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:46:12.377310   36755 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:46:12.377468   36755 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:46:12.377755   36755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:12.377789   36755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:12.392650   36755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37561
	I0621 18:46:12.393008   36755 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:12.393567   36755 main.go:141] libmachine: Using API Version  1
	I0621 18:46:12.393599   36755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:12.393951   36755 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:12.394256   36755 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:46:12.394436   36755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:46:12.394466   36755 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:46:12.397390   36755 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:46:12.397821   36755 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:46:12.397853   36755 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:46:12.398032   36755 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:46:12.398195   36755 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:46:12.398391   36755 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:46:12.398549   36755 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:46:12.472849   36755 ssh_runner.go:195] Run: systemctl --version
	I0621 18:46:12.478718   36755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:46:12.494346   36755 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:46:12.494382   36755 api_server.go:166] Checking apiserver status ...
	I0621 18:46:12.494420   36755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0621 18:46:12.512123   36755 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup
	W0621 18:46:12.521848   36755 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:46:12.521909   36755 ssh_runner.go:195] Run: ls
	I0621 18:46:12.525998   36755 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0621 18:46:12.530066   36755 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0621 18:46:12.530088   36755 status.go:422] ha-406291 apiserver status = Running (err=<nil>)
	I0621 18:46:12.530098   36755 status.go:257] ha-406291 status: &{Name:ha-406291 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:46:12.530117   36755 status.go:255] checking status of ha-406291-m02 ...
	I0621 18:46:12.530428   36755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:12.530460   36755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:12.544853   36755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37385
	I0621 18:46:12.545322   36755 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:12.545836   36755 main.go:141] libmachine: Using API Version  1
	I0621 18:46:12.545858   36755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:12.546156   36755 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:12.546345   36755 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:46:12.547778   36755 status.go:330] ha-406291-m02 host status = "Running" (err=<nil>)
	I0621 18:46:12.547792   36755 host.go:66] Checking if "ha-406291-m02" exists ...
	I0621 18:46:12.548052   36755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:12.548085   36755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:12.562641   36755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33967
	I0621 18:46:12.563071   36755 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:12.563499   36755 main.go:141] libmachine: Using API Version  1
	I0621 18:46:12.563517   36755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:12.563824   36755 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:12.564047   36755 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:46:12.566909   36755 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:46:12.567315   36755 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:46:12.567333   36755 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:46:12.567494   36755 host.go:66] Checking if "ha-406291-m02" exists ...
	I0621 18:46:12.567791   36755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:12.567824   36755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:12.582479   36755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35059
	I0621 18:46:12.582933   36755 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:12.583433   36755 main.go:141] libmachine: Using API Version  1
	I0621 18:46:12.583452   36755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:12.583723   36755 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:12.583936   36755 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:46:12.584111   36755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:46:12.584133   36755 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:46:12.586890   36755 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:46:12.587368   36755 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:46:12.587401   36755 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:46:12.587496   36755 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:46:12.587671   36755 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:46:12.587827   36755 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:46:12.587968   36755 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:46:12.668455   36755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:46:12.684257   36755 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:46:12.684290   36755 api_server.go:166] Checking apiserver status ...
	I0621 18:46:12.684320   36755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0621 18:46:12.700729   36755 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:46:12.700752   36755 status.go:422] ha-406291-m02 apiserver status = Running (err=<nil>)
	I0621 18:46:12.700763   36755 status.go:257] ha-406291-m02 status: &{Name:ha-406291-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:46:12.700782   36755 status.go:255] checking status of ha-406291-m03 ...
	I0621 18:46:12.701151   36755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:12.701198   36755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:12.716011   36755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37427
	I0621 18:46:12.716462   36755 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:12.716923   36755 main.go:141] libmachine: Using API Version  1
	I0621 18:46:12.716943   36755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:12.717264   36755 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:12.717460   36755 main.go:141] libmachine: (ha-406291-m03) Calling .GetState
	I0621 18:46:12.718895   36755 status.go:330] ha-406291-m03 host status = "Running" (err=<nil>)
	I0621 18:46:12.718913   36755 host.go:66] Checking if "ha-406291-m03" exists ...
	I0621 18:46:12.719343   36755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:12.719389   36755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:12.735230   36755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46459
	I0621 18:46:12.735687   36755 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:12.736154   36755 main.go:141] libmachine: Using API Version  1
	I0621 18:46:12.736175   36755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:12.736535   36755 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:12.736732   36755 main.go:141] libmachine: (ha-406291-m03) Calling .GetIP
	I0621 18:46:12.739751   36755 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:46:12.740216   36755 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:46:12.740239   36755 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:46:12.740394   36755 host.go:66] Checking if "ha-406291-m03" exists ...
	I0621 18:46:12.740730   36755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:12.740770   36755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:12.756070   36755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35897
	I0621 18:46:12.756460   36755 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:12.756908   36755 main.go:141] libmachine: Using API Version  1
	I0621 18:46:12.756933   36755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:12.757254   36755 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:12.757455   36755 main.go:141] libmachine: (ha-406291-m03) Calling .DriverName
	I0621 18:46:12.757626   36755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:46:12.757643   36755 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHHostname
	I0621 18:46:12.760373   36755 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:46:12.760736   36755 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:46:12.760772   36755 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:46:12.760926   36755 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHPort
	I0621 18:46:12.761081   36755 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHKeyPath
	I0621 18:46:12.761252   36755 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHUsername
	I0621 18:46:12.761394   36755 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m03/id_rsa Username:docker}
	I0621 18:46:12.845260   36755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:46:12.858932   36755 status.go:257] ha-406291-m03 status: &{Name:ha-406291-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr: exit status 2 (558.698166ms)

                                                
                                                
-- stdout --
	ha-406291
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-406291-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-406291-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0621 18:46:22.738558   36854 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:46:22.738811   36854 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:46:22.738820   36854 out.go:304] Setting ErrFile to fd 2...
	I0621 18:46:22.738824   36854 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:46:22.738996   36854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:46:22.739165   36854 out.go:298] Setting JSON to false
	I0621 18:46:22.739187   36854 mustload.go:65] Loading cluster: ha-406291
	I0621 18:46:22.739240   36854 notify.go:220] Checking for updates...
	I0621 18:46:22.739713   36854 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:46:22.739738   36854 status.go:255] checking status of ha-406291 ...
	I0621 18:46:22.740166   36854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:22.740234   36854 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:22.755913   36854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42805
	I0621 18:46:22.756406   36854 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:22.756957   36854 main.go:141] libmachine: Using API Version  1
	I0621 18:46:22.756981   36854 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:22.757443   36854 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:22.757608   36854 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:46:22.759378   36854 status.go:330] ha-406291 host status = "Running" (err=<nil>)
	I0621 18:46:22.759395   36854 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:46:22.759710   36854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:22.759758   36854 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:22.774574   36854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38801
	I0621 18:46:22.774975   36854 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:22.775433   36854 main.go:141] libmachine: Using API Version  1
	I0621 18:46:22.775456   36854 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:22.775865   36854 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:22.776106   36854 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:46:22.779384   36854 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:46:22.779769   36854 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:46:22.779807   36854 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:46:22.779955   36854 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:46:22.780283   36854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:22.780316   36854 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:22.794586   36854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I0621 18:46:22.795032   36854 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:22.795672   36854 main.go:141] libmachine: Using API Version  1
	I0621 18:46:22.795692   36854 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:22.795986   36854 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:22.796189   36854 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:46:22.796414   36854 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:46:22.796461   36854 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:46:22.799358   36854 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:46:22.799792   36854 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:46:22.799819   36854 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:46:22.799997   36854 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:46:22.800188   36854 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:46:22.800326   36854 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:46:22.800466   36854 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:46:22.877082   36854 ssh_runner.go:195] Run: systemctl --version
	I0621 18:46:22.883020   36854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:46:22.896047   36854 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:46:22.896080   36854 api_server.go:166] Checking apiserver status ...
	I0621 18:46:22.896112   36854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0621 18:46:22.909381   36854 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup
	W0621 18:46:22.921960   36854 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:46:22.922018   36854 ssh_runner.go:195] Run: ls
	I0621 18:46:22.926239   36854 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0621 18:46:22.930282   36854 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0621 18:46:22.930308   36854 status.go:422] ha-406291 apiserver status = Running (err=<nil>)
	I0621 18:46:22.930321   36854 status.go:257] ha-406291 status: &{Name:ha-406291 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:46:22.930337   36854 status.go:255] checking status of ha-406291-m02 ...
	I0621 18:46:22.930711   36854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:22.930753   36854 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:22.946385   36854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I0621 18:46:22.946801   36854 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:22.947275   36854 main.go:141] libmachine: Using API Version  1
	I0621 18:46:22.947295   36854 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:22.947627   36854 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:22.947838   36854 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:46:22.949638   36854 status.go:330] ha-406291-m02 host status = "Running" (err=<nil>)
	I0621 18:46:22.949652   36854 host.go:66] Checking if "ha-406291-m02" exists ...
	I0621 18:46:22.950012   36854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:22.950046   36854 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:22.964554   36854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45831
	I0621 18:46:22.964966   36854 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:22.965394   36854 main.go:141] libmachine: Using API Version  1
	I0621 18:46:22.965409   36854 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:22.965720   36854 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:22.965903   36854 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:46:22.968773   36854 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:46:22.969405   36854 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:46:22.969432   36854 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:46:22.969513   36854 host.go:66] Checking if "ha-406291-m02" exists ...
	I0621 18:46:22.969792   36854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:22.969845   36854 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:22.984804   36854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42171
	I0621 18:46:22.985352   36854 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:22.985943   36854 main.go:141] libmachine: Using API Version  1
	I0621 18:46:22.985965   36854 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:22.986292   36854 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:22.986478   36854 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:46:22.986673   36854 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:46:22.986691   36854 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:46:22.989408   36854 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:46:22.989881   36854 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:46:22.989921   36854 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:46:22.990048   36854 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:46:22.990215   36854 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:46:22.990353   36854 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:46:22.990480   36854 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:46:23.068789   36854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:46:23.082555   36854 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:46:23.082583   36854 api_server.go:166] Checking apiserver status ...
	I0621 18:46:23.082611   36854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0621 18:46:23.093900   36854 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:46:23.093930   36854 status.go:422] ha-406291-m02 apiserver status = Stopped (err=<nil>)
	I0621 18:46:23.093942   36854 status.go:257] ha-406291-m02 status: &{Name:ha-406291-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:46:23.093960   36854 status.go:255] checking status of ha-406291-m03 ...
	I0621 18:46:23.094363   36854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:23.094403   36854 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:23.109012   36854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40791
	I0621 18:46:23.109498   36854 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:23.109995   36854 main.go:141] libmachine: Using API Version  1
	I0621 18:46:23.110018   36854 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:23.110506   36854 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:23.110706   36854 main.go:141] libmachine: (ha-406291-m03) Calling .GetState
	I0621 18:46:23.112304   36854 status.go:330] ha-406291-m03 host status = "Running" (err=<nil>)
	I0621 18:46:23.112334   36854 host.go:66] Checking if "ha-406291-m03" exists ...
	I0621 18:46:23.112613   36854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:23.112651   36854 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:23.127067   36854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41861
	I0621 18:46:23.127520   36854 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:23.127974   36854 main.go:141] libmachine: Using API Version  1
	I0621 18:46:23.128001   36854 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:23.128297   36854 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:23.128443   36854 main.go:141] libmachine: (ha-406291-m03) Calling .GetIP
	I0621 18:46:23.130933   36854 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:46:23.131356   36854 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:46:23.131395   36854 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:46:23.131546   36854 host.go:66] Checking if "ha-406291-m03" exists ...
	I0621 18:46:23.131887   36854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:23.131937   36854 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:23.147427   36854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39913
	I0621 18:46:23.147819   36854 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:23.148298   36854 main.go:141] libmachine: Using API Version  1
	I0621 18:46:23.148317   36854 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:23.148610   36854 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:23.148795   36854 main.go:141] libmachine: (ha-406291-m03) Calling .DriverName
	I0621 18:46:23.149003   36854 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:46:23.149031   36854 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHHostname
	I0621 18:46:23.151512   36854 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:46:23.151893   36854 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:46:23.151932   36854 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:46:23.152074   36854 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHPort
	I0621 18:46:23.152259   36854 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHKeyPath
	I0621 18:46:23.152400   36854 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHUsername
	I0621 18:46:23.152522   36854 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m03/id_rsa Username:docker}
	I0621 18:46:23.237607   36854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:46:23.252870   36854 status.go:257] ha-406291-m03 status: &{Name:ha-406291-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr" : exit status 2
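The assertion at ha_test.go:432 fails because "minikube status" exits non-zero whenever any node in the profile reports a stopped component (here the kubelet/apiserver on ha-406291-m02 never came back within the retry window). A minimal, illustrative sketch follows, assuming only the standard Go os/exec API; it is not part of ha_test.go, it just shows how the exit code seen above (exit status 2) can be reproduced and inspected outside the harness.

	// Illustrative sketch only (not taken from ha_test.go): run the same
	// status command shown in the log above and report its exit code.
	// Exit status 2 means the profile is up but at least one component
	// (kubelet or apiserver on some node) is not running.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Binary path and profile name mirror the command in the log above.
		cmd := exec.Command("out/minikube-linux-amd64",
			"-p", "ha-406291", "status", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Non-zero exit: cluster reachable but not fully healthy.
			fmt.Printf("minikube status exited with code %d\n", exitErr.ExitCode())
		} else if err != nil {
			fmt.Println("failed to run minikube status:", err)
		}
	}
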
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-406291 -n ha-406291
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-406291 logs -n 25: (1.161136266s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |           |         |         |                     |                     |
	| node    | add -p ha-406291 -v=7                | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:41 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-406291 node stop m02 -v=7         | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:41 UTC | 21 Jun 24 18:41 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-406291 node start m02 -v=7        | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:41 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/21 18:26:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0621 18:26:42.447747   30068 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:26:42.447858   30068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:26:42.447867   30068 out.go:304] Setting ErrFile to fd 2...
	I0621 18:26:42.447871   30068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:26:42.448064   30068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:26:42.448611   30068 out.go:298] Setting JSON to false
	I0621 18:26:42.449397   30068 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4100,"bootTime":1718990302,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 18:26:42.449454   30068 start.go:139] virtualization: kvm guest
	I0621 18:26:42.451750   30068 out.go:177] * [ha-406291] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 18:26:42.453097   30068 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 18:26:42.453116   30068 notify.go:220] Checking for updates...
	I0621 18:26:42.456195   30068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 18:26:42.457398   30068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:26:42.458579   30068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.459798   30068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 18:26:42.461088   30068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 18:26:42.462525   30068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 18:26:42.497263   30068 out.go:177] * Using the kvm2 driver based on user configuration
	I0621 18:26:42.498734   30068 start.go:297] selected driver: kvm2
	I0621 18:26:42.498753   30068 start.go:901] validating driver "kvm2" against <nil>
	I0621 18:26:42.498763   30068 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 18:26:42.499421   30068 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:26:42.499483   30068 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 18:26:42.513772   30068 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 18:26:42.513840   30068 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0621 18:26:42.514036   30068 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0621 18:26:42.514063   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:26:42.514070   30068 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0621 18:26:42.514080   30068 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0621 18:26:42.514119   30068 start.go:340] cluster config:
	{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:26:42.514203   30068 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:26:42.515839   30068 out.go:177] * Starting "ha-406291" primary control-plane node in "ha-406291" cluster
	I0621 18:26:42.516925   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:26:42.516952   30068 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 18:26:42.516960   30068 cache.go:56] Caching tarball of preloaded images
	I0621 18:26:42.517025   30068 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:26:42.517035   30068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:26:42.517302   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:26:42.517325   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json: {Name:mkd43eceea282503c79b6e4b90bbf7258fcf8b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:26:42.517445   30068 start.go:360] acquireMachinesLock for ha-406291: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:26:42.517470   30068 start.go:364] duration metric: took 13.314µs to acquireMachinesLock for "ha-406291"
	I0621 18:26:42.517485   30068 start.go:93] Provisioning new machine with config: &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:26:42.517531   30068 start.go:125] createHost starting for "" (driver="kvm2")
	I0621 18:26:42.518937   30068 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0621 18:26:42.519071   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:26:42.519109   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:26:42.533235   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0621 18:26:42.533669   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:26:42.534312   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:26:42.534360   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:26:42.534665   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:26:42.534880   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:26:42.535018   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:26:42.535180   30068 start.go:159] libmachine.API.Create for "ha-406291" (driver="kvm2")
	I0621 18:26:42.535209   30068 client.go:168] LocalClient.Create starting
	I0621 18:26:42.535233   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem
	I0621 18:26:42.535267   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:26:42.535282   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:26:42.535339   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem
	I0621 18:26:42.535357   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:26:42.535367   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:26:42.535383   30068 main.go:141] libmachine: Running pre-create checks...
	I0621 18:26:42.535396   30068 main.go:141] libmachine: (ha-406291) Calling .PreCreateCheck
	I0621 18:26:42.535734   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:26:42.536101   30068 main.go:141] libmachine: Creating machine...
	I0621 18:26:42.536113   30068 main.go:141] libmachine: (ha-406291) Calling .Create
	I0621 18:26:42.536232   30068 main.go:141] libmachine: (ha-406291) Creating KVM machine...
	I0621 18:26:42.537484   30068 main.go:141] libmachine: (ha-406291) DBG | found existing default KVM network
	I0621 18:26:42.538310   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.538153   30091 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0621 18:26:42.538339   30068 main.go:141] libmachine: (ha-406291) DBG | created network xml: 
	I0621 18:26:42.538346   30068 main.go:141] libmachine: (ha-406291) DBG | <network>
	I0621 18:26:42.538355   30068 main.go:141] libmachine: (ha-406291) DBG |   <name>mk-ha-406291</name>
	I0621 18:26:42.538371   30068 main.go:141] libmachine: (ha-406291) DBG |   <dns enable='no'/>
	I0621 18:26:42.538385   30068 main.go:141] libmachine: (ha-406291) DBG |   
	I0621 18:26:42.538392   30068 main.go:141] libmachine: (ha-406291) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0621 18:26:42.538400   30068 main.go:141] libmachine: (ha-406291) DBG |     <dhcp>
	I0621 18:26:42.538412   30068 main.go:141] libmachine: (ha-406291) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0621 18:26:42.538421   30068 main.go:141] libmachine: (ha-406291) DBG |     </dhcp>
	I0621 18:26:42.538439   30068 main.go:141] libmachine: (ha-406291) DBG |   </ip>
	I0621 18:26:42.538451   30068 main.go:141] libmachine: (ha-406291) DBG |   
	I0621 18:26:42.538458   30068 main.go:141] libmachine: (ha-406291) DBG | </network>
	I0621 18:26:42.538470   30068 main.go:141] libmachine: (ha-406291) DBG | 
	I0621 18:26:42.543401   30068 main.go:141] libmachine: (ha-406291) DBG | trying to create private KVM network mk-ha-406291 192.168.39.0/24...
	I0621 18:26:42.606041   30068 main.go:141] libmachine: (ha-406291) DBG | private KVM network mk-ha-406291 192.168.39.0/24 created
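The network XML printed above is exactly what gets handed to libvirt to create the private mk-ha-406291 network. For reference, a minimal, self-contained Go sketch of the same operation is shown here; it is not minikube's actual code path (the kvm2 driver talks to the libvirt API directly rather than shelling out), and the network name and 192.168.39.0/24 addressing are simply the values observed in this log.

// netdefine.go - illustrative sketch: define and start a private libvirt
// network equivalent to mk-ha-406291 by shelling out to virsh.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

const networkXML = `<network>
  <name>mk-ha-406291</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Write the XML to a temp file so virsh can read it.
	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		log.Fatal(err)
	}
	f.Close()

	// Define the persistent network from the XML, then start it.
	if err := run("virsh", "net-define", f.Name()); err != nil {
		log.Fatalf("net-define: %v", err)
	}
	if err := run("virsh", "net-start", "mk-ha-406291"); err != nil {
		log.Fatalf("net-start: %v", err)
	}
	fmt.Println("private network mk-ha-406291 defined and started")
}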
	I0621 18:26:42.606072   30068 main.go:141] libmachine: (ha-406291) Setting up store path in /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 ...
	I0621 18:26:42.606091   30068 main.go:141] libmachine: (ha-406291) Building disk image from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 18:26:42.606165   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.606075   30091 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.606280   30068 main.go:141] libmachine: (ha-406291) Downloading /home/jenkins/minikube-integration/19112-8111/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0621 18:26:42.829374   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.829262   30091 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa...
	I0621 18:26:42.941790   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.941666   30091 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/ha-406291.rawdisk...
	I0621 18:26:42.941834   30068 main.go:141] libmachine: (ha-406291) DBG | Writing magic tar header
	I0621 18:26:42.941844   30068 main.go:141] libmachine: (ha-406291) DBG | Writing SSH key tar header
	I0621 18:26:42.941852   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.941778   30091 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 ...
	I0621 18:26:42.941909   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291
	I0621 18:26:42.941989   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 (perms=drwx------)
	I0621 18:26:42.942007   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines
	I0621 18:26:42.942019   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines (perms=drwxr-xr-x)
	I0621 18:26:42.942033   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube (perms=drwxr-xr-x)
	I0621 18:26:42.942053   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111 (perms=drwxrwxr-x)
	I0621 18:26:42.942060   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.942069   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111
	I0621 18:26:42.942075   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0621 18:26:42.942080   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0621 18:26:42.942088   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins
	I0621 18:26:42.942104   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0621 18:26:42.942117   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home
	I0621 18:26:42.942128   30068 main.go:141] libmachine: (ha-406291) DBG | Skipping /home - not owner
	I0621 18:26:42.942142   30068 main.go:141] libmachine: (ha-406291) Creating domain...
	I0621 18:26:42.943154   30068 main.go:141] libmachine: (ha-406291) define libvirt domain using xml: 
	I0621 18:26:42.943176   30068 main.go:141] libmachine: (ha-406291) <domain type='kvm'>
	I0621 18:26:42.943183   30068 main.go:141] libmachine: (ha-406291)   <name>ha-406291</name>
	I0621 18:26:42.943188   30068 main.go:141] libmachine: (ha-406291)   <memory unit='MiB'>2200</memory>
	I0621 18:26:42.943199   30068 main.go:141] libmachine: (ha-406291)   <vcpu>2</vcpu>
	I0621 18:26:42.943203   30068 main.go:141] libmachine: (ha-406291)   <features>
	I0621 18:26:42.943208   30068 main.go:141] libmachine: (ha-406291)     <acpi/>
	I0621 18:26:42.943212   30068 main.go:141] libmachine: (ha-406291)     <apic/>
	I0621 18:26:42.943217   30068 main.go:141] libmachine: (ha-406291)     <pae/>
	I0621 18:26:42.943223   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943229   30068 main.go:141] libmachine: (ha-406291)   </features>
	I0621 18:26:42.943234   30068 main.go:141] libmachine: (ha-406291)   <cpu mode='host-passthrough'>
	I0621 18:26:42.943255   30068 main.go:141] libmachine: (ha-406291)   
	I0621 18:26:42.943266   30068 main.go:141] libmachine: (ha-406291)   </cpu>
	I0621 18:26:42.943284   30068 main.go:141] libmachine: (ha-406291)   <os>
	I0621 18:26:42.943318   30068 main.go:141] libmachine: (ha-406291)     <type>hvm</type>
	I0621 18:26:42.943328   30068 main.go:141] libmachine: (ha-406291)     <boot dev='cdrom'/>
	I0621 18:26:42.943333   30068 main.go:141] libmachine: (ha-406291)     <boot dev='hd'/>
	I0621 18:26:42.943341   30068 main.go:141] libmachine: (ha-406291)     <bootmenu enable='no'/>
	I0621 18:26:42.943345   30068 main.go:141] libmachine: (ha-406291)   </os>
	I0621 18:26:42.943355   30068 main.go:141] libmachine: (ha-406291)   <devices>
	I0621 18:26:42.943360   30068 main.go:141] libmachine: (ha-406291)     <disk type='file' device='cdrom'>
	I0621 18:26:42.943371   30068 main.go:141] libmachine: (ha-406291)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/boot2docker.iso'/>
	I0621 18:26:42.943384   30068 main.go:141] libmachine: (ha-406291)       <target dev='hdc' bus='scsi'/>
	I0621 18:26:42.943397   30068 main.go:141] libmachine: (ha-406291)       <readonly/>
	I0621 18:26:42.943404   30068 main.go:141] libmachine: (ha-406291)     </disk>
	I0621 18:26:42.943417   30068 main.go:141] libmachine: (ha-406291)     <disk type='file' device='disk'>
	I0621 18:26:42.943429   30068 main.go:141] libmachine: (ha-406291)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0621 18:26:42.943445   30068 main.go:141] libmachine: (ha-406291)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/ha-406291.rawdisk'/>
	I0621 18:26:42.943456   30068 main.go:141] libmachine: (ha-406291)       <target dev='hda' bus='virtio'/>
	I0621 18:26:42.943478   30068 main.go:141] libmachine: (ha-406291)     </disk>
	I0621 18:26:42.943499   30068 main.go:141] libmachine: (ha-406291)     <interface type='network'>
	I0621 18:26:42.943509   30068 main.go:141] libmachine: (ha-406291)       <source network='mk-ha-406291'/>
	I0621 18:26:42.943513   30068 main.go:141] libmachine: (ha-406291)       <model type='virtio'/>
	I0621 18:26:42.943519   30068 main.go:141] libmachine: (ha-406291)     </interface>
	I0621 18:26:42.943526   30068 main.go:141] libmachine: (ha-406291)     <interface type='network'>
	I0621 18:26:42.943532   30068 main.go:141] libmachine: (ha-406291)       <source network='default'/>
	I0621 18:26:42.943539   30068 main.go:141] libmachine: (ha-406291)       <model type='virtio'/>
	I0621 18:26:42.943544   30068 main.go:141] libmachine: (ha-406291)     </interface>
	I0621 18:26:42.943549   30068 main.go:141] libmachine: (ha-406291)     <serial type='pty'>
	I0621 18:26:42.943554   30068 main.go:141] libmachine: (ha-406291)       <target port='0'/>
	I0621 18:26:42.943560   30068 main.go:141] libmachine: (ha-406291)     </serial>
	I0621 18:26:42.943565   30068 main.go:141] libmachine: (ha-406291)     <console type='pty'>
	I0621 18:26:42.943571   30068 main.go:141] libmachine: (ha-406291)       <target type='serial' port='0'/>
	I0621 18:26:42.943583   30068 main.go:141] libmachine: (ha-406291)     </console>
	I0621 18:26:42.943593   30068 main.go:141] libmachine: (ha-406291)     <rng model='virtio'>
	I0621 18:26:42.943602   30068 main.go:141] libmachine: (ha-406291)       <backend model='random'>/dev/random</backend>
	I0621 18:26:42.943609   30068 main.go:141] libmachine: (ha-406291)     </rng>
	I0621 18:26:42.943617   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943621   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943627   30068 main.go:141] libmachine: (ha-406291)   </devices>
	I0621 18:26:42.943631   30068 main.go:141] libmachine: (ha-406291) </domain>
	I0621 18:26:42.943638   30068 main.go:141] libmachine: (ha-406291) 
	I0621 18:26:42.948298   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:44:10:c4 in network default
	I0621 18:26:42.948968   30068 main.go:141] libmachine: (ha-406291) Ensuring networks are active...
	I0621 18:26:42.948988   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:42.949710   30068 main.go:141] libmachine: (ha-406291) Ensuring network default is active
	I0621 18:26:42.950033   30068 main.go:141] libmachine: (ha-406291) Ensuring network mk-ha-406291 is active
	I0621 18:26:42.950493   30068 main.go:141] libmachine: (ha-406291) Getting domain xml...
	I0621 18:26:42.951151   30068 main.go:141] libmachine: (ha-406291) Creating domain...
	I0621 18:26:44.128421   30068 main.go:141] libmachine: (ha-406291) Waiting to get IP...
	I0621 18:26:44.129183   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.129530   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.129550   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.129513   30091 retry.go:31] will retry after 273.280189ms: waiting for machine to come up
	I0621 18:26:44.404590   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.405440   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.405467   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.405386   30091 retry.go:31] will retry after 363.287979ms: waiting for machine to come up
	I0621 18:26:44.769749   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.770188   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.770217   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.770146   30091 retry.go:31] will retry after 445.9009ms: waiting for machine to come up
	I0621 18:26:45.217708   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:45.218113   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:45.218132   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:45.218075   30091 retry.go:31] will retry after 497.769852ms: waiting for machine to come up
	I0621 18:26:45.717913   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:45.718380   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:45.718402   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:45.718333   30091 retry.go:31] will retry after 609.412902ms: waiting for machine to come up
	I0621 18:26:46.329589   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:46.330043   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:46.330077   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:46.330033   30091 retry.go:31] will retry after 668.226784ms: waiting for machine to come up
	I0621 18:26:46.999851   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:47.000352   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:47.000399   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:47.000310   30091 retry.go:31] will retry after 928.90777ms: waiting for machine to come up
	I0621 18:26:47.931043   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:47.931568   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:47.931598   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:47.931527   30091 retry.go:31] will retry after 1.407643188s: waiting for machine to come up
	I0621 18:26:49.341126   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:49.341529   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:49.341557   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:49.341489   30091 retry.go:31] will retry after 1.657120945s: waiting for machine to come up
	I0621 18:26:51.001518   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:51.001999   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:51.002022   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:51.001955   30091 retry.go:31] will retry after 1.506025988s: waiting for machine to come up
	I0621 18:26:52.509823   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:52.510314   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:52.510342   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:52.510269   30091 retry.go:31] will retry after 2.859818514s: waiting for machine to come up
	I0621 18:26:55.371181   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:55.371726   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:55.371755   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:55.371678   30091 retry.go:31] will retry after 3.374080501s: waiting for machine to come up
	I0621 18:26:58.747494   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:58.748019   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:58.748039   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:58.747991   30091 retry.go:31] will retry after 4.386740875s: waiting for machine to come up
	I0621 18:27:03.136546   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.137046   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has current primary IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.137063   30068 main.go:141] libmachine: (ha-406291) Found IP for machine: 192.168.39.198
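The "will retry after ..." lines above come from minikube's generic retry helper: it repeatedly asks libvirt for a DHCP lease on the domain's MAC address, sleeping for a growing, jittered interval between attempts, until an IP shows up (here 192.168.39.198 after roughly 20 seconds). A minimal sketch of that pattern follows; lookupIP is a hypothetical stand-in for the lease query, and the delays only approximate the 273ms, 363ms, 445ms, ... intervals seen in the log.

// waitforip.go - sketch of a retry loop with growing, jittered backoff.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP stands in for "query libvirt for the lease of MAC 52:54:00:38:dc:46".
func lookupIP(attempt int) (string, error) {
	if attempt < 5 { // pretend the lease appears on the 5th try
		return "", errNoLease
	}
	return "192.168.39.198", nil
}

func main() {
	delay := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Grow the delay and add jitter before the next attempt.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("attempt %d: %v, will retry after %v\n", attempt, err, sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}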
	I0621 18:27:03.137079   30068 main.go:141] libmachine: (ha-406291) Reserving static IP address...
	I0621 18:27:03.137427   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find host DHCP lease matching {name: "ha-406291", mac: "52:54:00:38:dc:46", ip: "192.168.39.198"} in network mk-ha-406291
	I0621 18:27:03.211473   30068 main.go:141] libmachine: (ha-406291) DBG | Getting to WaitForSSH function...
	I0621 18:27:03.211506   30068 main.go:141] libmachine: (ha-406291) Reserved static IP address: 192.168.39.198
	I0621 18:27:03.211519   30068 main.go:141] libmachine: (ha-406291) Waiting for SSH to be available...
	I0621 18:27:03.214029   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.214477   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291
	I0621 18:27:03.214509   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find defined IP address of network mk-ha-406291 interface with MAC address 52:54:00:38:dc:46
	I0621 18:27:03.214661   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH client type: external
	I0621 18:27:03.214702   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa (-rw-------)
	I0621 18:27:03.214745   30068 main.go:141] libmachine: (ha-406291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:03.214771   30068 main.go:141] libmachine: (ha-406291) DBG | About to run SSH command:
	I0621 18:27:03.214784   30068 main.go:141] libmachine: (ha-406291) DBG | exit 0
	I0621 18:27:03.218578   30068 main.go:141] libmachine: (ha-406291) DBG | SSH cmd err, output: exit status 255: 
	I0621 18:27:03.218603   30068 main.go:141] libmachine: (ha-406291) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0621 18:27:03.218614   30068 main.go:141] libmachine: (ha-406291) DBG | command : exit 0
	I0621 18:27:03.218630   30068 main.go:141] libmachine: (ha-406291) DBG | err     : exit status 255
	I0621 18:27:03.218643   30068 main.go:141] libmachine: (ha-406291) DBG | output  : 
	I0621 18:27:06.220803   30068 main.go:141] libmachine: (ha-406291) DBG | Getting to WaitForSSH function...
	I0621 18:27:06.223287   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.223552   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.223591   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.223725   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH client type: external
	I0621 18:27:06.223751   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa (-rw-------)
	I0621 18:27:06.223775   30068 main.go:141] libmachine: (ha-406291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:06.223788   30068 main.go:141] libmachine: (ha-406291) DBG | About to run SSH command:
	I0621 18:27:06.223797   30068 main.go:141] libmachine: (ha-406291) DBG | exit 0
	I0621 18:27:06.345962   30068 main.go:141] libmachine: (ha-406291) DBG | SSH cmd err, output: <nil>: 
	I0621 18:27:06.346198   30068 main.go:141] libmachine: (ha-406291) KVM machine creation complete!
	I0621 18:27:06.346530   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:27:06.347151   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:06.347376   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:06.347539   30068 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0621 18:27:06.347553   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:06.349257   30068 main.go:141] libmachine: Detecting operating system of created instance...
	I0621 18:27:06.349272   30068 main.go:141] libmachine: Waiting for SSH to be available...
	I0621 18:27:06.349278   30068 main.go:141] libmachine: Getting to WaitForSSH function...
	I0621 18:27:06.349284   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.351365   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.351709   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.351738   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.351848   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.352053   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.352215   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.352441   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.352676   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.352926   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.352939   30068 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0621 18:27:06.449038   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
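The WaitForSSH step above simply re-runs `ssh ... exit 0` against the guest until it succeeds: the first attempt at 18:27:03 fails with exit status 255 (sshd not reachable yet), and the retry at 18:27:06 succeeds. A rough standalone equivalent, using the external ssh client with a subset of the options visible in the log, might look like the sketch below; the fixed 3-second retry interval is an assumption, since libmachine drives this loop internally.

// waitforssh.go - poll a guest by running `ssh ... exit 0` until it succeeds.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshExitZero(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit", "0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	ip := "192.168.39.198"
	key := "/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa"
	for {
		if err := sshExitZero(ip, key); err != nil {
			// Exit status 255 means sshd is not reachable or not ready yet.
			fmt.Println("ssh not ready:", err, "- retrying in 3s")
			time.Sleep(3 * time.Second)
			continue
		}
		fmt.Println("SSH is available")
		return
	}
}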
	I0621 18:27:06.449066   30068 main.go:141] libmachine: Detecting the provisioner...
	I0621 18:27:06.449077   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.451811   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.452202   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.452223   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.452405   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.452602   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.452762   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.452898   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.453074   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.453321   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.453334   30068 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0621 18:27:06.550539   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0621 18:27:06.550611   30068 main.go:141] libmachine: found compatible host: buildroot
	I0621 18:27:06.550618   30068 main.go:141] libmachine: Provisioning with buildroot...
	I0621 18:27:06.550625   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.550871   30068 buildroot.go:166] provisioning hostname "ha-406291"
	I0621 18:27:06.550891   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.551068   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.553701   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.554112   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.554138   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.554279   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.554452   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.554601   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.554725   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.554869   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.555029   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.555040   30068 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291 && echo "ha-406291" | sudo tee /etc/hostname
	I0621 18:27:06.664012   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291
	
	I0621 18:27:06.664038   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.666600   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.666923   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.666952   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.667091   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.667277   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.667431   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.667559   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.667745   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.667932   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.667949   30068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:27:06.778156   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:06.778199   30068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:27:06.778224   30068 buildroot.go:174] setting up certificates
	I0621 18:27:06.778237   30068 provision.go:84] configureAuth start
	I0621 18:27:06.778250   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.778526   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:06.781267   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.781583   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.781610   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.781773   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.784225   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.784546   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.784564   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.784717   30068 provision.go:143] copyHostCerts
	I0621 18:27:06.784747   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:06.784796   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:27:06.784813   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:06.784893   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:27:06.784992   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:06.785017   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:27:06.785023   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:06.785064   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:27:06.785126   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:06.785153   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:27:06.785162   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:06.785194   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:27:06.785257   30068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291 san=[127.0.0.1 192.168.39.198 ha-406291 localhost minikube]
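The server certificate generated here carries the SAN list [127.0.0.1 192.168.39.198 ha-406291 localhost minikube] and the 26280h expiry from the cluster config. As a rough illustration of what that involves, the Go sketch below produces a certificate with the same SANs and lifetime; it is self-signed for brevity, whereas minikube signs the real one with the CA key at .minikube/certs/ca-key.pem.

// servercert.go - illustrative sketch: certificate with the SANs seen above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-406291"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-406291", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.198")},
	}
	// Self-signed: the template is also the parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}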
	I0621 18:27:06.904910   30068 provision.go:177] copyRemoteCerts
	I0621 18:27:06.904976   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:27:06.905004   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.907600   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.907883   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.907916   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.908115   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.908308   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.908462   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.908599   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:06.987463   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:27:06.987540   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:27:07.009572   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:27:07.009661   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0621 18:27:07.031219   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:27:07.031333   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 18:27:07.052682   30068 provision.go:87] duration metric: took 274.433059ms to configureAuth
	I0621 18:27:07.052709   30068 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:27:07.052895   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:07.052984   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.055368   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.055720   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.055742   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.055971   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.056161   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.056324   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.056453   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.056615   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:07.056785   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:07.056814   30068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:27:07.307055   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 18:27:07.307083   30068 main.go:141] libmachine: Checking connection to Docker...
	I0621 18:27:07.307105   30068 main.go:141] libmachine: (ha-406291) Calling .GetURL
	I0621 18:27:07.308373   30068 main.go:141] libmachine: (ha-406291) DBG | Using libvirt version 6000000
	I0621 18:27:07.310322   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.310631   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.310658   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.310756   30068 main.go:141] libmachine: Docker is up and running!
	I0621 18:27:07.310768   30068 main.go:141] libmachine: Reticulating splines...
	I0621 18:27:07.310774   30068 client.go:171] duration metric: took 24.775558818s to LocalClient.Create
	I0621 18:27:07.310795   30068 start.go:167] duration metric: took 24.775614868s to libmachine.API.Create "ha-406291"
	I0621 18:27:07.310807   30068 start.go:293] postStartSetup for "ha-406291" (driver="kvm2")
	I0621 18:27:07.310818   30068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:27:07.310835   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.311186   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:27:07.311208   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.313308   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.313543   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.313581   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.313682   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.313855   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.314042   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.314209   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.391859   30068 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:27:07.396062   30068 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:27:07.396083   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:27:07.396132   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:27:07.396193   30068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:27:07.396202   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:27:07.396289   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:27:07.405435   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:07.427927   30068 start.go:296] duration metric: took 117.075834ms for postStartSetup
	I0621 18:27:07.427984   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:27:07.428562   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:07.431157   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.431479   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.431523   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.431791   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:07.431969   30068 start.go:128] duration metric: took 24.914429669s to createHost
	I0621 18:27:07.431990   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.434121   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.434421   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.434445   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.434510   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.434692   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.434865   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.435009   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.435168   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:07.435372   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:07.435384   30068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0621 18:27:07.530141   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718994427.508226463
	
	I0621 18:27:07.530165   30068 fix.go:216] guest clock: 1718994427.508226463
	I0621 18:27:07.530173   30068 fix.go:229] Guest: 2024-06-21 18:27:07.508226463 +0000 UTC Remote: 2024-06-21 18:27:07.431981059 +0000 UTC m=+25.016949864 (delta=76.245404ms)
	I0621 18:27:07.530199   30068 fix.go:200] guest clock delta is within tolerance: 76.245404ms
	I0621 18:27:07.530204   30068 start.go:83] releasing machines lock for "ha-406291", held for 25.012726918s
	I0621 18:27:07.530222   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.530466   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:07.532753   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.533110   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.533151   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.533275   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533702   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533877   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533978   30068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:27:07.534028   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.534087   30068 ssh_runner.go:195] Run: cat /version.json
	I0621 18:27:07.534115   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.536489   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536798   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.536828   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536845   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536983   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.537154   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.537312   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.537330   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.537337   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.537509   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.537507   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.537675   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.537830   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.537968   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.610886   30068 ssh_runner.go:195] Run: systemctl --version
	I0621 18:27:07.648150   30068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:27:07.798080   30068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:27:07.803683   30068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:27:07.803731   30068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:27:07.820345   30068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0621 18:27:07.820363   30068 start.go:494] detecting cgroup driver to use...
	I0621 18:27:07.820412   30068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:27:07.835960   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:27:07.849269   30068 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:27:07.849324   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:27:07.861858   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:27:07.874371   30068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:27:07.984965   30068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:27:08.126897   30068 docker.go:233] disabling docker service ...
	I0621 18:27:08.126973   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:27:08.140294   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:27:08.152460   30068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:27:08.289101   30068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:27:08.414578   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 18:27:08.428193   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:27:08.445335   30068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:27:08.445406   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.454715   30068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:27:08.454780   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.464286   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.473688   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.483215   30068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:27:08.492907   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.502386   30068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.518138   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.527822   30068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:27:08.536491   30068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 18:27:08.536537   30068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 18:27:08.548343   30068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 18:27:08.557395   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:27:08.668782   30068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 18:27:08.793146   30068 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:27:08.793228   30068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:27:08.797886   30068 start.go:562] Will wait 60s for crictl version
	I0621 18:27:08.797933   30068 ssh_runner.go:195] Run: which crictl
	I0621 18:27:08.801183   30068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:27:08.838953   30068 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:27:08.839028   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:27:08.865047   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:27:08.892059   30068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:27:08.893365   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:08.895801   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:08.896174   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:08.896198   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:08.896377   30068 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:27:08.900124   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:27:08.912152   30068 kubeadm.go:877] updating cluster {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0621 18:27:08.912252   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:27:08.912299   30068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:27:08.941267   30068 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0621 18:27:08.941328   30068 ssh_runner.go:195] Run: which lz4
	I0621 18:27:08.944757   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0621 18:27:08.944843   30068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0621 18:27:08.948482   30068 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0621 18:27:08.948507   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0621 18:27:10.186487   30068 crio.go:462] duration metric: took 1.241671996s to copy over tarball
	I0621 18:27:10.186568   30068 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0621 18:27:12.219224   30068 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.032622286s)
	I0621 18:27:12.219256   30068 crio.go:469] duration metric: took 2.032747658s to extract the tarball
	I0621 18:27:12.219265   30068 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0621 18:27:12.255526   30068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:27:12.297692   30068 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 18:27:12.297715   30068 cache_images.go:84] Images are preloaded, skipping loading
	I0621 18:27:12.297725   30068 kubeadm.go:928] updating node { 192.168.39.198 8443 v1.30.2 crio true true} ...
	I0621 18:27:12.297863   30068 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0621 18:27:12.297956   30068 ssh_runner.go:195] Run: crio config
	I0621 18:27:12.347243   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:27:12.347276   30068 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0621 18:27:12.347288   30068 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0621 18:27:12.347314   30068 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.198 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-406291 NodeName:ha-406291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0621 18:27:12.347487   30068 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-406291"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0621 18:27:12.347514   30068 kube-vip.go:115] generating kube-vip config ...
	I0621 18:27:12.347563   30068 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:27:12.362180   30068 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:27:12.362273   30068 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0621 18:27:12.362316   30068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:27:12.371448   30068 binaries.go:44] Found k8s binaries, skipping transfer
	I0621 18:27:12.371499   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0621 18:27:12.380031   30068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0621 18:27:12.395354   30068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0621 18:27:12.410533   30068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0621 18:27:12.425474   30068 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0621 18:27:12.440059   30068 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0621 18:27:12.443523   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:27:12.454828   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:27:12.572486   30068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0621 18:27:12.589057   30068 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.198
	I0621 18:27:12.589078   30068 certs.go:194] generating shared ca certs ...
	I0621 18:27:12.589095   30068 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.589221   30068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:27:12.589272   30068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:27:12.589282   30068 certs.go:256] generating profile certs ...
	I0621 18:27:12.589333   30068 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:27:12.589346   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt with IP's: []
	I0621 18:27:12.759863   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt ...
	I0621 18:27:12.759890   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt: {Name:mk1350197087e6f37ca28e80a43c199beace4f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.760090   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key ...
	I0621 18:27:12.760104   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key: {Name:mk90994b992a268304b337419707e3332d3f039a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.760206   30068 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92
	I0621 18:27:12.760222   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.254]
	I0621 18:27:13.132336   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 ...
	I0621 18:27:13.132362   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92: {Name:mke7daa70ff2d7bf8fa87eea51b1ed6731c0dd6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.132530   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92 ...
	I0621 18:27:13.132546   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92: {Name:mk310235904dba1c4db66ef73b8dcc06ff030051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.132647   30068 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:27:13.132737   30068 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
	I0621 18:27:13.132790   30068 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:27:13.132806   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt with IP's: []
	I0621 18:27:13.317891   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt ...
	I0621 18:27:13.317927   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt: {Name:mk5e450ef3633fa54e81eaeb94f9408c94729912 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.318119   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key ...
	I0621 18:27:13.318132   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key: {Name:mk3a1443924b05c36251566d5313d0eeb467e0fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.318220   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:27:13.318241   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:27:13.318251   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:27:13.318264   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:27:13.318274   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:27:13.318290   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:27:13.318302   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:27:13.318314   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:27:13.318363   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:27:13.318396   30068 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:27:13.318406   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:27:13.318428   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:27:13.318449   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:27:13.318469   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:27:13.318506   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:13.318531   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.318544   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.318556   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.319121   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:27:13.345382   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:27:13.379289   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:27:13.406853   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:27:13.430624   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0621 18:27:13.452498   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0621 18:27:13.474381   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:27:13.497475   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:27:13.520548   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:27:13.543849   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:27:13.569722   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:27:13.594191   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0621 18:27:13.611312   30068 ssh_runner.go:195] Run: openssl version
	I0621 18:27:13.616881   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:27:13.627054   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.631162   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.631214   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.636845   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:27:13.648132   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:27:13.658846   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.663074   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.663140   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.668358   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 18:27:13.678369   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:27:13.688293   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.692517   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.692581   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.697837   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0621 18:27:13.707967   30068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:27:13.711761   30068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 18:27:13.711821   30068 kubeadm.go:391] StartCluster: {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:27:13.711887   30068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0621 18:27:13.711960   30068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0621 18:27:13.752929   30068 cri.go:89] found id: ""
	I0621 18:27:13.753017   30068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0621 18:27:13.762514   30068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0621 18:27:13.771612   30068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0621 18:27:13.781740   30068 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0621 18:27:13.781758   30068 kubeadm.go:156] found existing configuration files:
	
	I0621 18:27:13.781811   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0621 18:27:13.790876   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0621 18:27:13.790943   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0621 18:27:13.800011   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0621 18:27:13.809117   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0621 18:27:13.809168   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0621 18:27:13.818279   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0621 18:27:13.827522   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0621 18:27:13.827584   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0621 18:27:13.836671   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0621 18:27:13.845242   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0621 18:27:13.845298   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0621 18:27:13.854365   30068 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0621 18:27:13.951888   30068 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0621 18:27:13.951970   30068 kubeadm.go:309] [preflight] Running pre-flight checks
	I0621 18:27:14.081675   30068 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0621 18:27:14.081845   30068 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0621 18:27:14.081983   30068 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0621 18:27:14.292951   30068 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0621 18:27:14.423174   30068 out.go:204]   - Generating certificates and keys ...
	I0621 18:27:14.423287   30068 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0621 18:27:14.423355   30068 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0621 18:27:14.524306   30068 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0621 18:27:14.693249   30068 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0621 18:27:14.771462   30068 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0621 18:27:14.965492   30068 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0621 18:27:15.095342   30068 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0621 18:27:15.095646   30068 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-406291 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I0621 18:27:15.247328   30068 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0621 18:27:15.247729   30068 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-406291 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I0621 18:27:15.326656   30068 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0621 18:27:15.470979   30068 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0621 18:27:15.620090   30068 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0621 18:27:15.620402   30068 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0621 18:27:15.715693   30068 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0621 18:27:16.259484   30068 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0621 18:27:16.704626   30068 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0621 18:27:16.836633   30068 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0621 18:27:16.996818   30068 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0621 18:27:16.997517   30068 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0621 18:27:16.999949   30068 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0621 18:27:17.001874   30068 out.go:204]   - Booting up control plane ...
	I0621 18:27:17.001982   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0621 18:27:17.002874   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0621 18:27:17.003729   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0621 18:27:17.018894   30068 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0621 18:27:17.019816   30068 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0621 18:27:17.019944   30068 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0621 18:27:17.138099   30068 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0621 18:27:17.138195   30068 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0621 18:27:17.639115   30068 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.282189ms
	I0621 18:27:17.639214   30068 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0621 18:27:23.502026   30068 kubeadm.go:309] [api-check] The API server is healthy after 5.864418149s
	I0621 18:27:23.512938   30068 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0621 18:27:23.528670   30068 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0621 18:27:24.059886   30068 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0621 18:27:24.060060   30068 kubeadm.go:309] [mark-control-plane] Marking the node ha-406291 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0621 18:27:24.071607   30068 kubeadm.go:309] [bootstrap-token] Using token: ha2utu.p9k0bq1xsr5791t7
	I0621 18:27:24.073185   30068 out.go:204]   - Configuring RBAC rules ...
	I0621 18:27:24.073336   30068 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0621 18:27:24.084336   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0621 18:27:24.092265   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0621 18:27:24.096415   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0621 18:27:24.101175   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0621 18:27:24.104689   30068 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0621 18:27:24.121568   30068 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0621 18:27:24.349610   30068 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0621 18:27:24.907607   30068 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0621 18:27:24.908452   30068 kubeadm.go:309] 
	I0621 18:27:24.908529   30068 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0621 18:27:24.908541   30068 kubeadm.go:309] 
	I0621 18:27:24.908607   30068 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0621 18:27:24.908645   30068 kubeadm.go:309] 
	I0621 18:27:24.908698   30068 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0621 18:27:24.908780   30068 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0621 18:27:24.908863   30068 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0621 18:27:24.908873   30068 kubeadm.go:309] 
	I0621 18:27:24.908975   30068 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0621 18:27:24.908993   30068 kubeadm.go:309] 
	I0621 18:27:24.909038   30068 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0621 18:27:24.909045   30068 kubeadm.go:309] 
	I0621 18:27:24.909086   30068 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0621 18:27:24.909160   30068 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0621 18:27:24.909256   30068 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0621 18:27:24.909274   30068 kubeadm.go:309] 
	I0621 18:27:24.909401   30068 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0621 18:27:24.909522   30068 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0621 18:27:24.909544   30068 kubeadm.go:309] 
	I0621 18:27:24.909671   30068 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ha2utu.p9k0bq1xsr5791t7 \
	I0621 18:27:24.909771   30068 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df \
	I0621 18:27:24.909810   30068 kubeadm.go:309] 	--control-plane 
	I0621 18:27:24.909824   30068 kubeadm.go:309] 
	I0621 18:27:24.909898   30068 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0621 18:27:24.909904   30068 kubeadm.go:309] 
	I0621 18:27:24.909977   30068 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ha2utu.p9k0bq1xsr5791t7 \
	I0621 18:27:24.910064   30068 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df 
	I0621 18:27:24.910664   30068 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0621 18:27:24.910700   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:27:24.910708   30068 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0621 18:27:24.912398   30068 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0621 18:27:24.913676   30068 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0621 18:27:24.919660   30068 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0621 18:27:24.919677   30068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0621 18:27:24.938734   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0621 18:27:25.303975   30068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0621 18:27:25.304070   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:25.304073   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-406291 minikube.k8s.io/updated_at=2024_06_21T18_27_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600 minikube.k8s.io/name=ha-406291 minikube.k8s.io/primary=true
	I0621 18:27:25.334777   30068 ops.go:34] apiserver oom_adj: -16
	I0621 18:27:25.436873   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:25.937461   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:26.436991   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:26.937206   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:27.437152   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:27.937860   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:28.437177   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:28.937036   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:29.437007   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:29.937140   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:30.437060   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:30.937199   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:31.437695   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:31.937675   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:32.437034   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:32.937808   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:33.437793   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:33.937401   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:34.437307   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:34.937172   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:35.437428   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:35.937146   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:36.436951   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:36.937873   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:37.039583   30068 kubeadm.go:1107] duration metric: took 11.735587948s to wait for elevateKubeSystemPrivileges
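The burst of identical "kubectl get sa default" runs above is a poll loop: while elevating kube-system privileges, minikube repeatedly checks that the default ServiceAccount has been created by the controller manager, and the 11.7s recorded for elevateKubeSystemPrivileges is mostly that wait. A minimal sketch of such a wait, using a plain exec-based helper rather than minikube's ssh_runner (the helper itself is illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA mirrors the ~500ms polling cadence visible in the log:
// run `kubectl get sa default` against the cluster until it succeeds or the
// deadline passes. The binary and kubeconfig paths are the ones in the log.
func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.30.2/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil {
			return nil // the default ServiceAccount now exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for the default ServiceAccount")
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}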
	W0621 18:27:37.039626   30068 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0621 18:27:37.039635   30068 kubeadm.go:393] duration metric: took 23.327819322s to StartCluster
	I0621 18:27:37.039654   30068 settings.go:142] acquiring lock: {Name:mkdbb660cad4d8fb446e5c2ca4439ea3326e9592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:37.039737   30068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:27:37.040362   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/kubeconfig: {Name:mk87038194ab41f67dd50d90b017d32a83c3da4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:37.040584   30068 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:27:37.040604   30068 start.go:240] waiting for startup goroutines ...
	I0621 18:27:37.040603   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0621 18:27:37.040612   30068 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0621 18:27:37.040669   30068 addons.go:69] Setting storage-provisioner=true in profile "ha-406291"
	I0621 18:27:37.040677   30068 addons.go:69] Setting default-storageclass=true in profile "ha-406291"
	I0621 18:27:37.040699   30068 addons.go:234] Setting addon storage-provisioner=true in "ha-406291"
	I0621 18:27:37.040730   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:27:37.040700   30068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-406291"
	I0621 18:27:37.040772   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:37.041052   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.041075   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.041146   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.041174   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.055583   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42699
	I0621 18:27:37.056062   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.056549   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.056570   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.056894   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.057371   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.057399   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.061343   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44857
	I0621 18:27:37.061846   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.062393   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.062418   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.062721   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.062885   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.065021   30068 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:27:37.065351   30068 kapi.go:59] client config for ha-406291: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt", KeyFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key", CAFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf98a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0621 18:27:37.065825   30068 cert_rotation.go:137] Starting client certificate rotation controller
	I0621 18:27:37.066065   30068 addons.go:234] Setting addon default-storageclass=true in "ha-406291"
	I0621 18:27:37.066106   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:27:37.066471   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.066512   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.072759   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39433
	I0621 18:27:37.073274   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.073791   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.073819   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.074169   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.074346   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.076096   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:37.078312   30068 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0621 18:27:37.079815   30068 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 18:27:37.079840   30068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0621 18:27:37.079864   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:37.081896   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0621 18:27:37.082293   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.082859   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.082878   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.083163   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.083202   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.083607   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.083648   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.083733   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:37.083752   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.083817   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:37.083990   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:37.084135   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:37.084288   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:37.103512   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42081
	I0621 18:27:37.103937   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.104456   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.104473   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.104853   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.105052   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.106976   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:37.107211   30068 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0621 18:27:37.107231   30068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0621 18:27:37.107252   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:37.110295   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.110729   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:37.110755   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.110870   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:37.111030   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:37.111197   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:37.111314   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:37.137868   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0621 18:27:37.228739   30068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 18:27:37.290397   30068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0621 18:27:37.684619   30068 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
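The sed pipeline a few lines up edits the coredns ConfigMap in place: it inserts a log directive before errors and a hosts block ahead of the forward plugin so that host.minikube.internal resolves to the host-side gateway 192.168.39.1. Reconstructed from those sed expressions (the final Corefile itself is not printed in the log), the relevant fragment looks roughly like:

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}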
	I0621 18:27:37.902862   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.902882   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.902957   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.902988   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903179   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903194   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903203   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.903210   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903287   30068 main.go:141] libmachine: (ha-406291) DBG | Closing plugin on server side
	I0621 18:27:37.903300   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903312   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903321   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.903328   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903474   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903485   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903513   30068 main.go:141] libmachine: (ha-406291) DBG | Closing plugin on server side
	I0621 18:27:37.903578   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903595   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903740   30068 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0621 18:27:37.903767   30068 round_trippers.go:469] Request Headers:
	I0621 18:27:37.903778   30068 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:27:37.903784   30068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:27:37.922164   30068 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0621 18:27:37.922691   30068 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0621 18:27:37.922706   30068 round_trippers.go:469] Request Headers:
	I0621 18:27:37.922713   30068 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:27:37.922718   30068 round_trippers.go:473]     Content-Type: application/json
	I0621 18:27:37.922720   30068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:27:37.926249   30068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0621 18:27:37.926491   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.926512   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.926731   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.926748   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.928515   30068 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0621 18:27:37.930095   30068 addons.go:510] duration metric: took 889.47949ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0621 18:27:37.930127   30068 start.go:245] waiting for cluster config update ...
	I0621 18:27:37.930137   30068 start.go:254] writing updated cluster config ...
	I0621 18:27:37.931687   30068 out.go:177] 
	I0621 18:27:37.933039   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:37.933102   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:37.934716   30068 out.go:177] * Starting "ha-406291-m02" control-plane node in "ha-406291" cluster
	I0621 18:27:37.935953   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:27:37.935970   30068 cache.go:56] Caching tarball of preloaded images
	I0621 18:27:37.936052   30068 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:27:37.936063   30068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:27:37.936142   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:37.936325   30068 start.go:360] acquireMachinesLock for ha-406291-m02: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:27:37.936370   30068 start.go:364] duration metric: took 24.972µs to acquireMachinesLock for "ha-406291-m02"
	I0621 18:27:37.936392   30068 start.go:93] Provisioning new machine with config: &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:27:37.936481   30068 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0621 18:27:37.938349   30068 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0621 18:27:37.938428   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.938450   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.952767   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34515
	I0621 18:27:37.953176   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.953649   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.953669   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.953963   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.954162   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:37.954301   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:37.954431   30068 start.go:159] libmachine.API.Create for "ha-406291" (driver="kvm2")
	I0621 18:27:37.954456   30068 client.go:168] LocalClient.Create starting
	I0621 18:27:37.954488   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem
	I0621 18:27:37.954518   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:27:37.954538   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:27:37.954589   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem
	I0621 18:27:37.954607   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:27:37.954621   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:27:37.954636   30068 main.go:141] libmachine: Running pre-create checks...
	I0621 18:27:37.954644   30068 main.go:141] libmachine: (ha-406291-m02) Calling .PreCreateCheck
	I0621 18:27:37.954836   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:37.955238   30068 main.go:141] libmachine: Creating machine...
	I0621 18:27:37.955253   30068 main.go:141] libmachine: (ha-406291-m02) Calling .Create
	I0621 18:27:37.955404   30068 main.go:141] libmachine: (ha-406291-m02) Creating KVM machine...
	I0621 18:27:37.956748   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found existing default KVM network
	I0621 18:27:37.956951   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found existing private KVM network mk-ha-406291
	I0621 18:27:37.957069   30068 main.go:141] libmachine: (ha-406291-m02) Setting up store path in /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 ...
	I0621 18:27:37.957091   30068 main.go:141] libmachine: (ha-406291-m02) Building disk image from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 18:27:37.957139   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:37.957062   30460 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:27:37.957278   30068 main.go:141] libmachine: (ha-406291-m02) Downloading /home/jenkins/minikube-integration/19112-8111/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0621 18:27:38.178433   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.178291   30460 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa...
	I0621 18:27:38.322659   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.322470   30460 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/ha-406291-m02.rawdisk...
	I0621 18:27:38.322709   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 (perms=drwx------)
	I0621 18:27:38.322719   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Writing magic tar header
	I0621 18:27:38.322734   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Writing SSH key tar header
	I0621 18:27:38.322745   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.322583   30460 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 ...
	I0621 18:27:38.322758   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02
	I0621 18:27:38.322822   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines
	I0621 18:27:38.322839   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines (perms=drwxr-xr-x)
	I0621 18:27:38.322855   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube (perms=drwxr-xr-x)
	I0621 18:27:38.322864   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111 (perms=drwxrwxr-x)
	I0621 18:27:38.322874   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0621 18:27:38.322882   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0621 18:27:38.322896   30068 main.go:141] libmachine: (ha-406291-m02) Creating domain...
	I0621 18:27:38.322919   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:27:38.322939   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111
	I0621 18:27:38.322950   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0621 18:27:38.322968   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins
	I0621 18:27:38.322980   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home
	I0621 18:27:38.322988   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Skipping /home - not owner
	I0621 18:27:38.324031   30068 main.go:141] libmachine: (ha-406291-m02) define libvirt domain using xml: 
	I0621 18:27:38.324058   30068 main.go:141] libmachine: (ha-406291-m02) <domain type='kvm'>
	I0621 18:27:38.324071   30068 main.go:141] libmachine: (ha-406291-m02)   <name>ha-406291-m02</name>
	I0621 18:27:38.324078   30068 main.go:141] libmachine: (ha-406291-m02)   <memory unit='MiB'>2200</memory>
	I0621 18:27:38.324087   30068 main.go:141] libmachine: (ha-406291-m02)   <vcpu>2</vcpu>
	I0621 18:27:38.324098   30068 main.go:141] libmachine: (ha-406291-m02)   <features>
	I0621 18:27:38.324107   30068 main.go:141] libmachine: (ha-406291-m02)     <acpi/>
	I0621 18:27:38.324116   30068 main.go:141] libmachine: (ha-406291-m02)     <apic/>
	I0621 18:27:38.324125   30068 main.go:141] libmachine: (ha-406291-m02)     <pae/>
	I0621 18:27:38.324134   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324149   30068 main.go:141] libmachine: (ha-406291-m02)   </features>
	I0621 18:27:38.324164   30068 main.go:141] libmachine: (ha-406291-m02)   <cpu mode='host-passthrough'>
	I0621 18:27:38.324173   30068 main.go:141] libmachine: (ha-406291-m02)   
	I0621 18:27:38.324184   30068 main.go:141] libmachine: (ha-406291-m02)   </cpu>
	I0621 18:27:38.324199   30068 main.go:141] libmachine: (ha-406291-m02)   <os>
	I0621 18:27:38.324209   30068 main.go:141] libmachine: (ha-406291-m02)     <type>hvm</type>
	I0621 18:27:38.324220   30068 main.go:141] libmachine: (ha-406291-m02)     <boot dev='cdrom'/>
	I0621 18:27:38.324231   30068 main.go:141] libmachine: (ha-406291-m02)     <boot dev='hd'/>
	I0621 18:27:38.324258   30068 main.go:141] libmachine: (ha-406291-m02)     <bootmenu enable='no'/>
	I0621 18:27:38.324280   30068 main.go:141] libmachine: (ha-406291-m02)   </os>
	I0621 18:27:38.324293   30068 main.go:141] libmachine: (ha-406291-m02)   <devices>
	I0621 18:27:38.324310   30068 main.go:141] libmachine: (ha-406291-m02)     <disk type='file' device='cdrom'>
	I0621 18:27:38.324333   30068 main.go:141] libmachine: (ha-406291-m02)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/boot2docker.iso'/>
	I0621 18:27:38.324344   30068 main.go:141] libmachine: (ha-406291-m02)       <target dev='hdc' bus='scsi'/>
	I0621 18:27:38.324350   30068 main.go:141] libmachine: (ha-406291-m02)       <readonly/>
	I0621 18:27:38.324357   30068 main.go:141] libmachine: (ha-406291-m02)     </disk>
	I0621 18:27:38.324363   30068 main.go:141] libmachine: (ha-406291-m02)     <disk type='file' device='disk'>
	I0621 18:27:38.324375   30068 main.go:141] libmachine: (ha-406291-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0621 18:27:38.324390   30068 main.go:141] libmachine: (ha-406291-m02)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/ha-406291-m02.rawdisk'/>
	I0621 18:27:38.324401   30068 main.go:141] libmachine: (ha-406291-m02)       <target dev='hda' bus='virtio'/>
	I0621 18:27:38.324412   30068 main.go:141] libmachine: (ha-406291-m02)     </disk>
	I0621 18:27:38.324421   30068 main.go:141] libmachine: (ha-406291-m02)     <interface type='network'>
	I0621 18:27:38.324431   30068 main.go:141] libmachine: (ha-406291-m02)       <source network='mk-ha-406291'/>
	I0621 18:27:38.324442   30068 main.go:141] libmachine: (ha-406291-m02)       <model type='virtio'/>
	I0621 18:27:38.324453   30068 main.go:141] libmachine: (ha-406291-m02)     </interface>
	I0621 18:27:38.324465   30068 main.go:141] libmachine: (ha-406291-m02)     <interface type='network'>
	I0621 18:27:38.324474   30068 main.go:141] libmachine: (ha-406291-m02)       <source network='default'/>
	I0621 18:27:38.324481   30068 main.go:141] libmachine: (ha-406291-m02)       <model type='virtio'/>
	I0621 18:27:38.324493   30068 main.go:141] libmachine: (ha-406291-m02)     </interface>
	I0621 18:27:38.324503   30068 main.go:141] libmachine: (ha-406291-m02)     <serial type='pty'>
	I0621 18:27:38.324516   30068 main.go:141] libmachine: (ha-406291-m02)       <target port='0'/>
	I0621 18:27:38.324527   30068 main.go:141] libmachine: (ha-406291-m02)     </serial>
	I0621 18:27:38.324540   30068 main.go:141] libmachine: (ha-406291-m02)     <console type='pty'>
	I0621 18:27:38.324553   30068 main.go:141] libmachine: (ha-406291-m02)       <target type='serial' port='0'/>
	I0621 18:27:38.324562   30068 main.go:141] libmachine: (ha-406291-m02)     </console>
	I0621 18:27:38.324572   30068 main.go:141] libmachine: (ha-406291-m02)     <rng model='virtio'>
	I0621 18:27:38.324596   30068 main.go:141] libmachine: (ha-406291-m02)       <backend model='random'>/dev/random</backend>
	I0621 18:27:38.324609   30068 main.go:141] libmachine: (ha-406291-m02)     </rng>
	I0621 18:27:38.324630   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324640   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324648   30068 main.go:141] libmachine: (ha-406291-m02)   </devices>
	I0621 18:27:38.324660   30068 main.go:141] libmachine: (ha-406291-m02) </domain>
	I0621 18:27:38.324670   30068 main.go:141] libmachine: (ha-406291-m02) 
	I0621 18:27:38.332042   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:20:08:0e in network default
	I0621 18:27:38.332641   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring networks are active...
	I0621 18:27:38.332676   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:38.333428   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring network default is active
	I0621 18:27:38.333804   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring network mk-ha-406291 is active
	I0621 18:27:38.334296   30068 main.go:141] libmachine: (ha-406291-m02) Getting domain xml...
	I0621 18:27:38.335120   30068 main.go:141] libmachine: (ha-406291-m02) Creating domain...
	I0621 18:27:39.549305   30068 main.go:141] libmachine: (ha-406291-m02) Waiting to get IP...
	I0621 18:27:39.550967   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:39.551951   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:39.551976   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:39.551936   30460 retry.go:31] will retry after 267.635955ms: waiting for machine to come up
	I0621 18:27:39.821522   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:39.821997   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:39.822029   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:39.821946   30460 retry.go:31] will retry after 374.873977ms: waiting for machine to come up
	I0621 18:27:40.198386   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:40.198873   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:40.198904   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:40.198809   30460 retry.go:31] will retry after 315.815993ms: waiting for machine to come up
	I0621 18:27:40.516366   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:40.516862   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:40.516886   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:40.516817   30460 retry.go:31] will retry after 541.866776ms: waiting for machine to come up
	I0621 18:27:41.060525   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:41.061206   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:41.061240   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:41.061128   30460 retry.go:31] will retry after 493.062164ms: waiting for machine to come up
	I0621 18:27:41.555747   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:41.556109   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:41.556139   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:41.556061   30460 retry.go:31] will retry after 805.68132ms: waiting for machine to come up
	I0621 18:27:42.362929   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:42.363432   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:42.363464   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:42.363390   30460 retry.go:31] will retry after 986.445399ms: waiting for machine to come up
	I0621 18:27:43.351818   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:43.352265   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:43.352293   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:43.352201   30460 retry.go:31] will retry after 1.001415085s: waiting for machine to come up
	I0621 18:27:44.355253   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:44.355689   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:44.355710   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:44.355671   30460 retry.go:31] will retry after 1.270979624s: waiting for machine to come up
	I0621 18:27:45.627787   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:45.628323   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:45.628354   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:45.628272   30460 retry.go:31] will retry after 2.328221347s: waiting for machine to come up
	I0621 18:27:47.958352   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:47.958918   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:47.958945   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:47.958858   30460 retry.go:31] will retry after 2.603205559s: waiting for machine to come up
	I0621 18:27:50.565502   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:50.565956   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:50.565982   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:50.565839   30460 retry.go:31] will retry after 3.267607258s: waiting for machine to come up
	I0621 18:27:53.834801   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:53.835311   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:53.835344   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:53.835270   30460 retry.go:31] will retry after 4.450176964s: waiting for machine to come up
	I0621 18:27:58.286744   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.287205   30068 main.go:141] libmachine: (ha-406291-m02) Found IP for machine: 192.168.39.89
	I0621 18:27:58.287228   30068 main.go:141] libmachine: (ha-406291-m02) Reserving static IP address...
	I0621 18:27:58.287241   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has current primary IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.287601   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find host DHCP lease matching {name: "ha-406291-m02", mac: "52:54:00:a6:9a:09", ip: "192.168.39.89"} in network mk-ha-406291
	I0621 18:27:58.359643   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Getting to WaitForSSH function...
	I0621 18:27:58.359672   30068 main.go:141] libmachine: (ha-406291-m02) Reserved static IP address: 192.168.39.89
	I0621 18:27:58.359686   30068 main.go:141] libmachine: (ha-406291-m02) Waiting for SSH to be available...
	I0621 18:27:58.362234   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.362656   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.362687   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.362831   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH client type: external
	I0621 18:27:58.362856   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa (-rw-------)
	I0621 18:27:58.362889   30068 main.go:141] libmachine: (ha-406291-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:58.362901   30068 main.go:141] libmachine: (ha-406291-m02) DBG | About to run SSH command:
	I0621 18:27:58.362914   30068 main.go:141] libmachine: (ha-406291-m02) DBG | exit 0
	I0621 18:27:58.489760   30068 main.go:141] libmachine: (ha-406291-m02) DBG | SSH cmd err, output: <nil>: 
	I0621 18:27:58.490247   30068 main.go:141] libmachine: (ha-406291-m02) KVM machine creation complete!
	I0621 18:27:58.490512   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:58.491093   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:58.491338   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:58.491506   30068 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0621 18:27:58.491523   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:27:58.492807   30068 main.go:141] libmachine: Detecting operating system of created instance...
	I0621 18:27:58.492820   30068 main.go:141] libmachine: Waiting for SSH to be available...
	I0621 18:27:58.492825   30068 main.go:141] libmachine: Getting to WaitForSSH function...
	I0621 18:27:58.492853   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.495422   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.495802   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.495822   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.496013   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.496199   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.496377   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.496515   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.496690   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.496943   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.496957   30068 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0621 18:27:58.609072   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:58.609094   30068 main.go:141] libmachine: Detecting the provisioner...
	I0621 18:27:58.609101   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.611976   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.612412   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.612450   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.612655   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.612869   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.613083   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.613234   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.613421   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.613617   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.613629   30068 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0621 18:27:58.726636   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0621 18:27:58.726736   30068 main.go:141] libmachine: found compatible host: buildroot
	I0621 18:27:58.726751   30068 main.go:141] libmachine: Provisioning with buildroot...
	I0621 18:27:58.726768   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.727017   30068 buildroot.go:166] provisioning hostname "ha-406291-m02"
	I0621 18:27:58.727040   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.727234   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.729851   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.730255   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.730296   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.730453   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.730628   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.730787   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.730932   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.731090   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.731271   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.731295   30068 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291-m02 && echo "ha-406291-m02" | sudo tee /etc/hostname
	I0621 18:27:58.855682   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291-m02
	
	I0621 18:27:58.855710   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.858373   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.858679   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.858702   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.858921   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.859107   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.859289   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.859473   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.859613   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.859768   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.859784   30068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:27:58.979692   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
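The hostname step above runs an idempotent shell fix-up of /etc/hosts: replace the 127.0.1.1 entry if one exists, otherwise append one. A small Go sketch that builds the same script (the helper name is illustrative, not minikube's API):

// Builds the idempotent /etc/hosts patch seen in the log for a given hostname.
package main

import "fmt"

func hostsPatchCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() { fmt.Println(hostsPatchCmd("ha-406291-m02")) }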
	I0621 18:27:58.979722   30068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:27:58.979735   30068 buildroot.go:174] setting up certificates
	I0621 18:27:58.979743   30068 provision.go:84] configureAuth start
	I0621 18:27:58.979750   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.980076   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:58.982730   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.983078   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.983110   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.983252   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.985344   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.985701   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.985721   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.985890   30068 provision.go:143] copyHostCerts
	I0621 18:27:58.985924   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:58.985962   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:27:58.985976   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:58.986057   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:27:58.986156   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:58.986180   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:27:58.986187   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:58.986229   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:27:58.986293   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:58.986317   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:27:58.986326   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:58.986360   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:27:58.986426   30068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291-m02 san=[127.0.0.1 192.168.39.89 ha-406291-m02 localhost minikube]
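The server certificate is generated with the SAN list shown above (loopback, the node IP, and the node/cluster hostnames). A hedged sketch of the same shape using only the Go standard library; it self-signs for brevity, whereas the real flow signs with the CA referenced in the log:

// Illustrative only: a server cert whose SANs mirror the logged list.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-406291-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.89")},
		DNSNames:     []string{"ha-406291-m02", "localhost", "minikube"},
	}
	// Self-signed here for brevity; minikube signs with the shared cluster CA.
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}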
	I0621 18:27:59.066564   30068 provision.go:177] copyRemoteCerts
	I0621 18:27:59.066626   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:27:59.066653   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.069578   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.069924   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.069947   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.070132   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.070298   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.070432   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.070553   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.157218   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:27:59.157315   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 18:27:59.181198   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:27:59.181277   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:27:59.204590   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:27:59.204671   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0621 18:27:59.228836   30068 provision.go:87] duration metric: took 249.081961ms to configureAuth
	I0621 18:27:59.228857   30068 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:27:59.229023   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:59.229086   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.231759   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.232083   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.232114   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.232338   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.232525   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.232684   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.232859   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.233030   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:59.233222   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:59.233247   30068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:27:59.513149   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 18:27:59.513176   30068 main.go:141] libmachine: Checking connection to Docker...
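The step just logged wrote /etc/sysconfig/crio.minikube with a single CRIO_MINIKUBE_OPTIONS line carrying the service CIDR as an insecure registry, then restarted CRI-O. A one-function sketch of composing that drop-in (helper name assumed):

// Renders the CRI-O sysconfig drop-in shown in the SSH output above.
package main

import "fmt"

func crioMinikubeOptions(serviceCIDR string) string {
	return fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
}

func main() {
	fmt.Print(crioMinikubeOptions("10.96.0.0/12"))
}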
	I0621 18:27:59.513184   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetURL
	I0621 18:27:59.514352   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using libvirt version 6000000
	I0621 18:27:59.516825   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.517208   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.517232   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.517421   30068 main.go:141] libmachine: Docker is up and running!
	I0621 18:27:59.517438   30068 main.go:141] libmachine: Reticulating splines...
	I0621 18:27:59.517446   30068 client.go:171] duration metric: took 21.562982419s to LocalClient.Create
	I0621 18:27:59.517469   30068 start.go:167] duration metric: took 21.563040702s to libmachine.API.Create "ha-406291"
	I0621 18:27:59.517482   30068 start.go:293] postStartSetup for "ha-406291-m02" (driver="kvm2")
	I0621 18:27:59.517494   30068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:27:59.517516   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.517768   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:27:59.517792   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.520113   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.520510   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.520540   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.520681   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.520881   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.521084   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.521256   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.607755   30068 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:27:59.611555   30068 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:27:59.611581   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:27:59.611696   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:27:59.611804   30068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:27:59.611817   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:27:59.611939   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:27:59.620359   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:59.643420   30068 start.go:296] duration metric: took 125.923821ms for postStartSetup
	I0621 18:27:59.643465   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:59.644062   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:59.646345   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.646685   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.646713   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.646924   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:59.647158   30068 start.go:128] duration metric: took 21.710666055s to createHost
	I0621 18:27:59.647181   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.649469   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.649766   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.649808   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.649962   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.650164   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.650334   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.650463   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.650585   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:59.650778   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:59.650790   30068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0621 18:27:59.762223   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718994479.737744516
	
	I0621 18:27:59.762248   30068 fix.go:216] guest clock: 1718994479.737744516
	I0621 18:27:59.762259   30068 fix.go:229] Guest: 2024-06-21 18:27:59.737744516 +0000 UTC Remote: 2024-06-21 18:27:59.647170431 +0000 UTC m=+77.232139235 (delta=90.574085ms)
	I0621 18:27:59.762274   30068 fix.go:200] guest clock delta is within tolerance: 90.574085ms
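The clock check runs `date +%s.%N` on the guest and compares it with the host clock, accepting the node when the delta stays within tolerance (90.574085ms here). A minimal sketch of that comparison; the 2-second threshold is an assumed value for illustration, not minikube's constant:

// Parse the guest's epoch timestamp and measure skew against the local clock.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	const guestOut = "1718994479.737744516" // output of `date +%s.%N` on the guest
	secs, _ := strconv.ParseFloat(guestOut, 64)
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumption for illustration
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta < tolerance)
}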
	I0621 18:27:59.762279   30068 start.go:83] releasing machines lock for "ha-406291-m02", held for 21.825898335s
	I0621 18:27:59.762294   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.762550   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:59.765379   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.765744   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.765772   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.768017   30068 out.go:177] * Found network options:
	I0621 18:27:59.769201   30068 out.go:177]   - NO_PROXY=192.168.39.198
	W0621 18:27:59.770311   30068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0621 18:27:59.770350   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.770853   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.771049   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.771143   30068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:27:59.771180   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	W0621 18:27:59.771247   30068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0621 18:27:59.771305   30068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:27:59.771322   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.774073   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774210   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774455   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.774482   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774586   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.774595   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.774615   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774740   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.774796   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.774875   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.774963   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.775030   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.775150   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.775184   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:28:00.009851   30068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:28:00.016373   30068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:28:00.016450   30068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:28:00.032199   30068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0621 18:28:00.032221   30068 start.go:494] detecting cgroup driver to use...
	I0621 18:28:00.032283   30068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:28:00.047343   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:28:00.061720   30068 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:28:00.061774   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:28:00.074668   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:28:00.087919   30068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:28:00.213060   30068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:28:00.376339   30068 docker.go:233] disabling docker service ...
	I0621 18:28:00.376406   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:28:00.391732   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:28:00.405305   30068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:28:00.525867   30068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:28:00.642362   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 18:28:00.656276   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:28:00.673811   30068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:28:00.673883   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.683794   30068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:28:00.683849   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.693601   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.703298   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.712924   30068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:28:00.722921   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.733272   30068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.749781   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.759708   30068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:28:00.768749   30068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 18:28:00.768811   30068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 18:28:00.780758   30068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
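When the bridge-netfilter sysctl is missing (the status 255 above), the fallback is to load br_netfilter and then enable IPv4 forwarding. A rough sketch of that fallback, run locally via os/exec purely for illustration rather than over SSH as in the real flow:

package main

import (
	"fmt"
	"os/exec"
)

func run(cmd string) error {
	return exec.Command("sh", "-c", cmd).Run()
}

func main() {
	if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
		// Missing /proc/sys/net/bridge/* usually just means the module is not loaded yet.
		fmt.Println("bridge-nf sysctl unavailable, loading br_netfilter:", err)
		_ = run("sudo modprobe br_netfilter")
	}
	_ = run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
}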
	I0621 18:28:00.789993   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:28:00.904855   30068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 18:28:01.039631   30068 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:28:01.039706   30068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:28:01.044480   30068 start.go:562] Will wait 60s for crictl version
	I0621 18:28:01.044536   30068 ssh_runner.go:195] Run: which crictl
	I0621 18:28:01.048220   30068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:28:01.089333   30068 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:28:01.089402   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:28:01.115665   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:28:01.144585   30068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:28:01.145952   30068 out.go:177]   - env NO_PROXY=192.168.39.198
	I0621 18:28:01.147149   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:28:01.149745   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:28:01.150121   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:28:01.150153   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:28:01.150424   30068 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:28:01.154395   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:28:01.167802   30068 mustload.go:65] Loading cluster: ha-406291
	I0621 18:28:01.168024   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:28:01.168528   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:28:01.168581   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:28:01.183458   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35465
	I0621 18:28:01.183955   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:28:01.184452   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:28:01.184472   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:28:01.184809   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:28:01.185006   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:28:01.186504   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:28:01.186796   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:28:01.186838   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:28:01.201898   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38995
	I0621 18:28:01.202307   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:28:01.202715   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:28:01.202735   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:28:01.203060   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:28:01.203242   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:28:01.203402   30068 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.89
	I0621 18:28:01.203414   30068 certs.go:194] generating shared ca certs ...
	I0621 18:28:01.203427   30068 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.203536   30068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:28:01.203569   30068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:28:01.203578   30068 certs.go:256] generating profile certs ...
	I0621 18:28:01.203637   30068 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:28:01.203663   30068 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63
	I0621 18:28:01.203682   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.89 192.168.39.254]
	I0621 18:28:01.277240   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 ...
	I0621 18:28:01.277269   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63: {Name:mk0eb1e86875fe5e87f845f9e621f66001c859bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.277433   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63 ...
	I0621 18:28:01.277446   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63: {Name:mk95e28e76a927e44fae3dabafa76ecc474c70ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.277517   30068 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:28:01.277686   30068 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
	I0621 18:28:01.277852   30068 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:28:01.277870   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:28:01.277883   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:28:01.277894   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:28:01.277906   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:28:01.277922   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:28:01.277934   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:28:01.277946   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:28:01.277957   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:28:01.278003   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:28:01.278030   30068 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:28:01.278039   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:28:01.278059   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:28:01.278080   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:28:01.278100   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:28:01.278136   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:28:01.278162   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.278179   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.278191   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.278220   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:28:01.281289   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:28:01.281749   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:28:01.281771   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:28:01.281960   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:28:01.282180   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:28:01.282351   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:28:01.282534   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:28:01.350153   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0621 18:28:01.355146   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0621 18:28:01.366317   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0621 18:28:01.370418   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0621 18:28:01.381527   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0621 18:28:01.385371   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0621 18:28:01.395583   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0621 18:28:01.399523   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0621 18:28:01.409427   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0621 18:28:01.413340   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0621 18:28:01.424281   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0621 18:28:01.428574   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0621 18:28:01.443501   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:28:01.467141   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:28:01.489464   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:28:01.512839   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:28:01.536345   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0621 18:28:01.560903   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0621 18:28:01.585228   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:28:01.609236   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:28:01.632797   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:28:01.657717   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:28:01.680728   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:28:01.704813   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0621 18:28:01.722206   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0621 18:28:01.739548   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0621 18:28:01.757066   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0621 18:28:01.773769   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0621 18:28:01.790648   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0621 18:28:01.807019   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0621 18:28:01.824606   30068 ssh_runner.go:195] Run: openssl version
	I0621 18:28:01.830760   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:28:01.841994   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.846701   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.846753   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.852556   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0621 18:28:01.863407   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:28:01.874586   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.879134   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.879185   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.884636   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:28:01.895639   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:28:01.907107   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.911747   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.911813   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.917537   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 18:28:01.928452   30068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:28:01.932569   30068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 18:28:01.932640   30068 kubeadm.go:928] updating node {m02 192.168.39.89 8443 v1.30.2 crio true true} ...
	I0621 18:28:01.932831   30068 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
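The kubelet unit above is templated per node: the binary path carries the Kubernetes version, and --hostname-override/--node-ip carry the joining node's identity. A hypothetical helper showing that templating (function name assumed, flag set copied from the log):

package main

import "fmt"

func kubeletExecStart(version, nodeName, nodeIP string) string {
	return fmt.Sprintf(
		"/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
			"--config=/var/lib/kubelet/config.yaml --hostname-override=%s "+
			"--kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s",
		version, nodeName, nodeIP)
}

func main() {
	fmt.Println(kubeletExecStart("v1.30.2", "ha-406291-m02", "192.168.39.89"))
}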
	I0621 18:28:01.932869   30068 kube-vip.go:115] generating kube-vip config ...
	I0621 18:28:01.932919   30068 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:28:01.949970   30068 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:28:01.950046   30068 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
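The kube-vip static pod above is driven entirely by environment variables: the VIP address, the interface, the API server port, and the auto-enabled control-plane load balancing. A small sketch of assembling that env list (types and helper are illustrative, not the manifest generator itself):

package main

import "fmt"

type envVar struct{ Name, Value string }

func kubeVIPEnv(vip, iface, port string) []envVar {
	return []envVar{
		{"vip_arp", "true"},
		{"port", port},
		{"vip_interface", iface},
		{"cp_enable", "true"},
		{"vip_leaderelection", "true"},
		{"address", vip},
		{"lb_enable", "true"}, // control-plane load balancing, auto-enabled per the log
		{"lb_port", port},
	}
}

func main() {
	for _, e := range kubeVIPEnv("192.168.39.254", "eth0", "8443") {
		fmt.Printf("%s=%s\n", e.Name, e.Value)
	}
}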
	I0621 18:28:01.950102   30068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:28:01.960116   30068 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0621 18:28:01.960197   30068 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0621 18:28:01.969893   30068 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0621 18:28:01.969926   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0621 18:28:01.969997   30068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0621 18:28:01.970033   30068 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0621 18:28:01.970001   30068 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet
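The kubeadm/kubelet URLs above use go-getter's `?checksum=file:` form, meaning each binary is verified against its published .sha256 sidecar. A self-contained sketch of that verify-after-download pattern (plain net/http here, not the go-getter code minikube actually uses):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch downloads a URL into memory; fine for a sketch, not for 50MB binaries.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0]
	fmt.Println("checksum ok:", hex.EncodeToString(got[:]) == want)
}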
	I0621 18:28:01.974344   30068 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0621 18:28:01.974375   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0621 18:28:02.755689   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0621 18:28:02.755764   30068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0621 18:28:02.760415   30068 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0621 18:28:02.760448   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0621 18:28:55.051081   30068 out.go:177] 
	W0621 18:28:55.052955   30068 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0] Decompressors:map[bz2:0xc000769610 gz:0xc000769618 tar:0xc0007695c0 tar.bz2:0xc0007695d0 tar.gz:0xc0007695e0 tar.xz:0xc0007695f0 tar.zst:0xc000769600 tbz2:0xc0007695d0 tgz:0xc0007695e0 txz:0xc0007695f0 tzst:0xc000769600 xz:0xc000769620 zip:0xc000769630 zst:0xc000769628] Getters:map[file:0xc0009371c0 http:0xc0008bcf50 https:0xc0008bcfa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:46716->151.101.193.55:443: read: connection reset by peer
	W0621 18:28:55.052979   30068 out.go:239] * 
	W0621 18:28:55.053829   30068 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0621 18:28:55.055312   30068 out.go:177] 
	
	
	==> CRI-O <==
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.820709239Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995583820687085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=675ad12f-4f68-4190-b2c3-f34e7f7fd28d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.821540605Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=387e795a-78be-4a08-8061-ceaea82430eb name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.821609393Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=387e795a-78be-4a08-8061-ceaea82430eb name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.821861165Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=387e795a-78be-4a08-8061-ceaea82430eb name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.858277844Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f18f3f6-06db-43b1-a4b9-3b73b8deb7f9 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.858385584Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f18f3f6-06db-43b1-a4b9-3b73b8deb7f9 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.859422817Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f7b036dd-aa7a-4cf9-aa99-2371374af283 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.860429228Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995583860396887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7b036dd-aa7a-4cf9-aa99-2371374af283 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.867738876Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3bcfedd-bafa-4fec-9e52-7887cd92d52a name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.867890465Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3bcfedd-bafa-4fec-9e52-7887cd92d52a name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.868226044Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3bcfedd-bafa-4fec-9e52-7887cd92d52a name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.908058831Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f13ae647-fdfe-4854-9724-811b44147371 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.908191995Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f13ae647-fdfe-4854-9724-811b44147371 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.909544862Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd10d369-9e46-4f15-91d2-b54b5dbd8825 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.910323271Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995583910292523,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd10d369-9e46-4f15-91d2-b54b5dbd8825 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.910810992Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cc072a4-eb12-4303-9065-171d6b27963a name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.910941731Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cc072a4-eb12-4303-9065-171d6b27963a name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.911283930Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4cc072a4-eb12-4303-9065-171d6b27963a name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.951911693Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=946cf15c-ce80-44b6-b7c8-31c29989f789 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.951987662Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=946cf15c-ce80-44b6-b7c8-31c29989f789 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.953649575Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ad779aaf-f6d6-4d69-a96a-8e317299312d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.954300889Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995583954263702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad779aaf-f6d6-4d69-a96a-8e317299312d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.954827375Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d608a629-6bcf-469b-9462-be9768cc53a0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.954905652Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d608a629-6bcf-469b-9462-be9768cc53a0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:46:23 ha-406291 crio[679]: time="2024-06-21 18:46:23.955184880Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d608a629-6bcf-469b-9462-be9768cc53a0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	252cb2f279857       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   17 minutes ago      Running             busybox                   0                   cd0fd4f6a3d6c       busybox-fc5497c4f-qvl48
	6d732e2622f11       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      18 minutes ago      Running             coredns                   0                   59eb38b2794b0       coredns-7db6d8ff4d-7ng4v
	6088ccc5ec4be       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      18 minutes ago      Running             coredns                   0                   a68caa8578d30       coredns-7db6d8ff4d-nx5xs
	9d0ad73531279       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       0                   ab6a16146209c       storage-provisioner
	468b13f5a8054       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      18 minutes ago      Running             kindnet-cni               0                   956df8749e8db       kindnet-vnds7
	e41f8891c5177       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      18 minutes ago      Running             kube-proxy                0                   ab9fd8c2e0094       kube-proxy-xnbqj
	96a229fabb5aa       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     19 minutes ago      Running             kube-vip                  0                   79ad95611cf22       kube-vip-ha-406291
	a143e6000662a       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      19 minutes ago      Running             kube-scheduler            0                   7cae0fc993f3a       kube-scheduler-ha-406291
	89b399d67fa40       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      19 minutes ago      Running             etcd                      0                   afce4542ea7ca       etcd-ha-406291
	2d71c6ae5cee5       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      19 minutes ago      Running             kube-apiserver            0                   9552de7a0cb73       kube-apiserver-ha-406291
	3fbe446b39e8d       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      19 minutes ago      Running             kube-controller-manager   0                   2b8837f8e36da       kube-controller-manager-ha-406291
	
	
	==> coredns [6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57758 - 16030 "HINFO IN 938012208132191314.8379741084222464033. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014128651s
	[INFO] 10.244.0.4:60864 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000870211s
	[INFO] 10.244.0.4:49527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014553s
	[INFO] 10.244.0.4:59987 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181145s
	[INFO] 10.244.0.4:59378 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009664502s
	[INFO] 10.244.0.4:59188 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181625s
	[INFO] 10.244.0.4:33100 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137671s
	[INFO] 10.244.0.4:43551 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129631s
	[INFO] 10.244.0.4:59759 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152418s
	[INFO] 10.244.0.4:60292 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090372s
	[INFO] 10.244.0.4:47967 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093215s
	[INFO] 10.244.0.4:44642 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175452s
	[INFO] 10.244.0.4:49677 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000070108s
	
	
	==> coredns [6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45911 - 30730 "HINFO IN 2397840142540691982.2649863782968500509. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014966559s
	[INFO] 10.244.0.4:38404 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.013105268s
	[INFO] 10.244.0.4:49299 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.225770527s
	[INFO] 10.244.0.4:41342 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.010990835s
	[INFO] 10.244.0.4:55838 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003903098s
	[INFO] 10.244.0.4:59078 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163236s
	[INFO] 10.244.0.4:39541 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147137s
	[INFO] 10.244.0.4:47420 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000120366s
	[INFO] 10.244.0.4:54009 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000255172s
	
	
	==> describe nodes <==
	Name:               ha-406291
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=ha-406291
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_21T18_27_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 18:27:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406291
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 18:46:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 18:44:44 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 18:44:44 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 18:44:44 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 18:44:44 +0000   Fri, 21 Jun 2024 18:27:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    ha-406291
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 10b5f2f4e64d426eb3a71e7a23c0cea5
	  System UUID:                10b5f2f4-e64d-426e-b3a7-1e7a23c0cea5
	  Boot ID:                    10778ad9-ed13-4749-a084-25b2b2bfde76
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qvl48              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7db6d8ff4d-7ng4v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 coredns-7db6d8ff4d-nx5xs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-ha-406291                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-vnds7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-apiserver-ha-406291             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-406291    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-xnbqj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-406291             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-406291                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 18m   kube-proxy       
	  Normal  Starting                 19m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m   kubelet          Node ha-406291 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m   kubelet          Node ha-406291 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m   kubelet          Node ha-406291 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m   node-controller  Node ha-406291 event: Registered Node ha-406291 in Controller
	  Normal  NodeReady                18m   kubelet          Node ha-406291 status is now: NodeReady
	
	
	Name:               ha-406291-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406291-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=ha-406291
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_21T18_41_02_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 18:41:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406291-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 18:46:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 18:41:31 +0000   Fri, 21 Jun 2024 18:41:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 18:41:31 +0000   Fri, 21 Jun 2024 18:41:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 18:41:31 +0000   Fri, 21 Jun 2024 18:41:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 18:41:31 +0000   Fri, 21 Jun 2024 18:41:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.193
	  Hostname:    ha-406291-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7aeb6d6b65b246d89e229cf308cb4c9a
	  System UUID:                7aeb6d6b-65b2-46d8-9e22-9cf308cb4c9a
	  Boot ID:                    077bb108-4737-40c3-9892-3695b5a49d4a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-drm4v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kindnet-xrm6w              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m23s
	  kube-system                 kube-proxy-vknv4           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m17s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m23s (x2 over 5m23s)  kubelet          Node ha-406291-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m23s (x2 over 5m23s)  kubelet          Node ha-406291-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m23s (x2 over 5m23s)  kubelet          Node ha-406291-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m22s                  node-controller  Node ha-406291-m03 event: Registered Node ha-406291-m03 in Controller
	  Normal  NodeReady                5m14s                  kubelet          Node ha-406291-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[Jun21 18:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051748] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037330] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.458081] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.725935] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +4.855560] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun21 18:27] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.057394] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056681] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.167604] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.147792] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.253886] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +3.905184] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.549385] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.060073] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.066237] systemd-fstab-generator[1360]: Ignoring "noauto" option for root device
	[  +0.078680] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.552032] kauditd_printk_skb: 21 callbacks suppressed
	[Jun21 18:28] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8] <==
	{"level":"info","ts":"2024-06-21T18:27:18.939339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgVoteResp from f1d2ab5330a2a0e3 at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.939349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became leader at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.93936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f1d2ab5330a2a0e3 elected leader f1d2ab5330a2a0e3 at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.949394Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.951989Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f1d2ab5330a2a0e3","local-member-attributes":"{Name:ha-406291 ClientURLs:[https://192.168.39.198:2379]}","request-path":"/0/members/f1d2ab5330a2a0e3/attributes","cluster-id":"9fb372ad12afeb1b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-21T18:27:18.952029Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:27:18.952218Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:27:18.966375Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9fb372ad12afeb1b","local-member-id":"f1d2ab5330a2a0e3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.966532Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.966591Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.968078Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.198:2379"}
	{"level":"info","ts":"2024-06-21T18:27:18.969834Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-21T18:27:18.973596Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-21T18:27:18.986355Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-21T18:27:37.357719Z","caller":"traceutil/trace.go:171","msg":"trace[571743030] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"105.067279ms","start":"2024-06-21T18:27:37.252598Z","end":"2024-06-21T18:27:37.357665Z","steps":["trace[571743030] 'process raft request'  (duration: 48.775466ms)","trace[571743030] 'compare'  (duration: 56.093787ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T18:28:12.689426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.176174ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11593268453381319053 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:496 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:369 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-21T18:28:12.689586Z","caller":"traceutil/trace.go:171","msg":"trace[939483523] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"172.541349ms","start":"2024-06-21T18:28:12.517021Z","end":"2024-06-21T18:28:12.689563Z","steps":["trace[939483523] 'process raft request'  (duration: 46.605278ms)","trace[939483523] 'compare'  (duration: 124.988397ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-21T18:37:19.55118Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2024-06-21T18:37:19.562898Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":969,"took":"11.353931ms","hash":518064132,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2441216,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-06-21T18:37:19.562955Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":518064132,"revision":969,"compact-revision":-1}
	{"level":"info","ts":"2024-06-21T18:41:01.46327Z","caller":"traceutil/trace.go:171","msg":"trace[373022302] transaction","detail":"{read_only:false; response_revision:1916; number_of_response:1; }","duration":"202.232692ms","start":"2024-06-21T18:41:01.260997Z","end":"2024-06-21T18:41:01.46323Z","steps":["trace[373022302] 'process raft request'  (duration: 201.291371ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-21T18:41:01.463374Z","caller":"traceutil/trace.go:171","msg":"trace[1787973675] transaction","detail":"{read_only:false; response_revision:1917; number_of_response:1; }","duration":"177.381269ms","start":"2024-06-21T18:41:01.285981Z","end":"2024-06-21T18:41:01.463362Z","steps":["trace[1787973675] 'process raft request'  (duration: 177.120594ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-21T18:42:19.558621Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1509}
	{"level":"info","ts":"2024-06-21T18:42:19.563203Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1509,"took":"4.232264ms","hash":4134822789,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2011136,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-06-21T18:42:19.563247Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4134822789,"revision":1509,"compact-revision":969}
	
	
	==> kernel <==
	 18:46:24 up 19 min,  0 users,  load average: 0.71, 0.30, 0.16
	Linux ha-406291 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d] <==
	I0621 18:45:19.791468       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:45:29.797097       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:45:29.797324       1 main.go:227] handling current node
	I0621 18:45:29.797358       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:45:29.797419       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:45:39.801918       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:45:39.802012       1 main.go:227] handling current node
	I0621 18:45:39.802036       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:45:39.802052       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:45:49.814318       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:45:49.814403       1 main.go:227] handling current node
	I0621 18:45:49.814428       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:45:49.814433       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:45:59.819469       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:45:59.819500       1 main.go:227] handling current node
	I0621 18:45:59.819510       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:45:59.819515       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:46:09.827898       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:46:09.828096       1 main.go:227] handling current node
	I0621 18:46:09.828197       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:46:09.828225       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:46:19.840901       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:46:19.840942       1 main.go:227] handling current node
	I0621 18:46:19.840953       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:46:19.840958       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3] <==
	I0621 18:27:21.231033       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0621 18:27:21.231057       1 policy_source.go:224] refreshing policies
	E0621 18:27:21.244004       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0621 18:27:21.291900       1 controller.go:615] quota admission added evaluator for: namespaces
	I0621 18:27:21.301249       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0621 18:27:22.093764       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0621 18:27:22.100226       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0621 18:27:22.100345       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0621 18:27:22.679124       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0621 18:27:22.717908       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0621 18:27:22.803597       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0621 18:27:22.812663       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.198]
	I0621 18:27:22.813674       1 controller.go:615] quota admission added evaluator for: endpoints
	I0621 18:27:22.817676       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0621 18:27:23.142771       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0621 18:27:24.323202       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0621 18:27:24.338622       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0621 18:27:24.532806       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0621 18:27:36.921775       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0621 18:27:37.247444       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0621 18:40:26.217258       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52318: use of closed network connection
	E0621 18:40:26.646809       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52394: use of closed network connection
	E0621 18:40:27.039177       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52460: use of closed network connection
	E0621 18:40:29.475531       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52582: use of closed network connection
	E0621 18:40:29.631306       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52614: use of closed network connection
	
	
	==> kube-controller-manager [3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31] <==
	I0621 18:27:37.660938       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="161.085µs"
	I0621 18:27:39.328050       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="55.475µs"
	I0621 18:27:39.330983       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.725µs"
	I0621 18:27:39.352409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.246µs"
	I0621 18:27:39.366116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.163µs"
	I0621 18:27:40.575618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="65.679µs"
	I0621 18:27:40.612176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.937752ms"
	I0621 18:27:40.612598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.232µs"
	I0621 18:27:40.634931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.444693ms"
	I0621 18:27:40.635035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.847µs"
	I0621 18:27:41.885215       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0621 18:28:57.137627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.563277ms"
	I0621 18:28:57.164070       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.375749ms"
	I0621 18:28:57.164194       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.743µs"
	I0621 18:29:00.876863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.452577ms"
	I0621 18:29:00.877083       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.932µs"
	I0621 18:41:01.468373       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-406291-m03\" does not exist"
	I0621 18:41:01.505245       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-406291-m03" podCIDRs=["10.244.1.0/24"]
	I0621 18:41:02.015312       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-406291-m03"
	I0621 18:41:10.879504       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-406291-m03"
	I0621 18:41:10.905675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="137.95µs"
	I0621 18:41:10.905996       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.91µs"
	I0621 18:41:10.921286       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.939µs"
	I0621 18:41:14.431187       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.902838ms"
	I0621 18:41:14.431268       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.911µs"
	
	
	==> kube-proxy [e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547] <==
	I0621 18:27:38.126736       1 server_linux.go:69] "Using iptables proxy"
	I0621 18:27:38.143236       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.198"]
	I0621 18:27:38.177576       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0621 18:27:38.177626       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0621 18:27:38.177644       1 server_linux.go:165] "Using iptables Proxier"
	I0621 18:27:38.180797       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0621 18:27:38.181002       1 server.go:872] "Version info" version="v1.30.2"
	I0621 18:27:38.181026       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 18:27:38.182882       1 config.go:192] "Starting service config controller"
	I0621 18:27:38.183195       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0621 18:27:38.183262       1 config.go:101] "Starting endpoint slice config controller"
	I0621 18:27:38.183278       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0621 18:27:38.184787       1 config.go:319] "Starting node config controller"
	I0621 18:27:38.184819       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0621 18:27:38.283818       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0621 18:27:38.283839       1 shared_informer.go:320] Caches are synced for service config
	I0621 18:27:38.285303       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d] <==
	W0621 18:27:21.175406       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0621 18:27:21.176948       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0621 18:27:21.176960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0621 18:27:21.176992       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:21.177025       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177056       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0621 18:27:21.177088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0621 18:27:21.177120       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0621 18:27:21.177197       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:21.177204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0621 18:27:22.041765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.041824       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.144830       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:22.144881       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0621 18:27:22.217224       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.217266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.256407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:22.256450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0621 18:27:22.361486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0621 18:27:22.361536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0621 18:27:22.366073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0621 18:27:22.366190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0621 18:27:25.267361       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 21 18:42:24 ha-406291 kubelet[1367]: E0621 18:42:24.484793    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:42:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:42:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:42:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:42:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:43:24 ha-406291 kubelet[1367]: E0621 18:43:24.483749    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:43:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:43:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:43:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:43:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:44:24 ha-406291 kubelet[1367]: E0621 18:44:24.483527    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:44:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:44:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:44:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:44:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:45:24 ha-406291 kubelet[1367]: E0621 18:45:24.484220    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:45:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:45:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:45:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:45:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:46:24 ha-406291 kubelet[1367]: E0621 18:46:24.483559    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:46:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:46:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:46:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:46:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-406291 -n ha-406291
helpers_test.go:261: (dbg) Run:  kubectl --context ha-406291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-p2c87
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-406291 describe pod busybox-fc5497c4f-p2c87
helpers_test.go:282: (dbg) kubectl --context ha-406291 describe pod busybox-fc5497c4f-p2c87:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-p2c87
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q8tzk (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-q8tzk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  7m1s (x3 over 17m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  1s (x3 over 5m15s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (299.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:304: expected profile "ha-406291" in json of 'profile list' to include 4 nodes but have 3 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-406291\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-406291\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPor
t\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-406291\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.198\",\"Port\":8443,\"KubernetesVersion\":
\"v1.30.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.89\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.193\",\"Port\":0,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false
,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMet
rics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:307: expected profile "ha-406291" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-406291\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-406291\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\
"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-406291\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.198\",\"Port\":8443,\"Kuberne
tesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.89\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.193\",\"Port\":0,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,
\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false
,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-406291 -n ha-406291
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-406291 logs -n 25: (1.121432455s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |           |         |         |                     |                     |
	| node    | add -p ha-406291 -v=7                | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:41 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-406291 node stop m02 -v=7         | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:41 UTC | 21 Jun 24 18:41 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-406291 node start m02 -v=7        | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:41 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
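Editor's note: the Audit table above ends with the sequence that sets up this failure: `node add`, `node stop m02`, then a `node start m02` that records a start time but no end time. A hedged sketch of driving the same two commands from Go with os/exec follows; the binary path and profile name are taken from the table, everything else is illustrative.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run invokes the same minikube binary the test uses and echoes its output.
    func run(args ...string) error {
    	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
    	fmt.Printf("$ minikube %v\n%s\n", args, out)
    	return err
    }

    func main() {
    	// Same order as the Audit table: stop the secondary control plane, then restart it.
    	steps := [][]string{
    		{"-p", "ha-406291", "node", "stop", "m02", "-v=7", "--alsologtostderr"},
    		{"-p", "ha-406291", "node", "start", "m02", "-v=7", "--alsologtostderr"},
    	}
    	for _, s := range steps {
    		if err := run(s...); err != nil {
    			fmt.Println("step failed:", err)
    			return
    		}
    	}
    }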
	
	
	==> Last Start <==
	Log file created at: 2024/06/21 18:26:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0621 18:26:42.447747   30068 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:26:42.447858   30068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:26:42.447867   30068 out.go:304] Setting ErrFile to fd 2...
	I0621 18:26:42.447871   30068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:26:42.448064   30068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:26:42.448611   30068 out.go:298] Setting JSON to false
	I0621 18:26:42.449397   30068 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4100,"bootTime":1718990302,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 18:26:42.449454   30068 start.go:139] virtualization: kvm guest
	I0621 18:26:42.451750   30068 out.go:177] * [ha-406291] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 18:26:42.453097   30068 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 18:26:42.453116   30068 notify.go:220] Checking for updates...
	I0621 18:26:42.456195   30068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 18:26:42.457398   30068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:26:42.458579   30068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.459798   30068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 18:26:42.461088   30068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 18:26:42.462525   30068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 18:26:42.497263   30068 out.go:177] * Using the kvm2 driver based on user configuration
	I0621 18:26:42.498734   30068 start.go:297] selected driver: kvm2
	I0621 18:26:42.498753   30068 start.go:901] validating driver "kvm2" against <nil>
	I0621 18:26:42.498763   30068 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 18:26:42.499421   30068 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:26:42.499483   30068 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 18:26:42.513772   30068 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 18:26:42.513840   30068 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0621 18:26:42.514036   30068 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0621 18:26:42.514063   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:26:42.514070   30068 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0621 18:26:42.514080   30068 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0621 18:26:42.514119   30068 start.go:340] cluster config:
	{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0621 18:26:42.514203   30068 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:26:42.515839   30068 out.go:177] * Starting "ha-406291" primary control-plane node in "ha-406291" cluster
	I0621 18:26:42.516925   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:26:42.516952   30068 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 18:26:42.516960   30068 cache.go:56] Caching tarball of preloaded images
	I0621 18:26:42.517025   30068 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:26:42.517035   30068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
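Editor's note: the preload lines above skip the download because the v1.30.2/cri-o tarball is already in the cache; the check amounts to a stat on the expected path. A tiny sketch under that assumption (the helper name is mine, the path is copied from the log):

    package main

    import (
    	"fmt"
    	"os"
    )

    // preloadExists reports whether the preload tarball is already cached,
    // so the download step can be skipped as in the log above.
    func preloadExists(path string) bool {
    	info, err := os.Stat(path)
    	return err == nil && !info.IsDir()
    }

    func main() {
    	p := "/home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4"
    	if preloadExists(p) {
    		fmt.Println("found local preload, skipping download")
    	} else {
    		fmt.Println("preload missing, would download")
    	}
    }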
	I0621 18:26:42.517302   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:26:42.517325   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json: {Name:mkd43eceea282503c79b6e4b90bbf7258fcf8b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:26:42.517445   30068 start.go:360] acquireMachinesLock for ha-406291: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:26:42.517470   30068 start.go:364] duration metric: took 13.314µs to acquireMachinesLock for "ha-406291"
	I0621 18:26:42.517485   30068 start.go:93] Provisioning new machine with config: &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:26:42.517531   30068 start.go:125] createHost starting for "" (driver="kvm2")
	I0621 18:26:42.518937   30068 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0621 18:26:42.519071   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:26:42.519109   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:26:42.533235   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0621 18:26:42.533669   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:26:42.534312   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:26:42.534360   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:26:42.534665   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:26:42.534880   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:26:42.535018   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:26:42.535180   30068 start.go:159] libmachine.API.Create for "ha-406291" (driver="kvm2")
	I0621 18:26:42.535209   30068 client.go:168] LocalClient.Create starting
	I0621 18:26:42.535233   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem
	I0621 18:26:42.535267   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:26:42.535282   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:26:42.535339   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem
	I0621 18:26:42.535357   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:26:42.535367   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:26:42.535383   30068 main.go:141] libmachine: Running pre-create checks...
	I0621 18:26:42.535396   30068 main.go:141] libmachine: (ha-406291) Calling .PreCreateCheck
	I0621 18:26:42.535734   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:26:42.536101   30068 main.go:141] libmachine: Creating machine...
	I0621 18:26:42.536113   30068 main.go:141] libmachine: (ha-406291) Calling .Create
	I0621 18:26:42.536232   30068 main.go:141] libmachine: (ha-406291) Creating KVM machine...
	I0621 18:26:42.537484   30068 main.go:141] libmachine: (ha-406291) DBG | found existing default KVM network
	I0621 18:26:42.538310   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.538153   30091 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0621 18:26:42.538339   30068 main.go:141] libmachine: (ha-406291) DBG | created network xml: 
	I0621 18:26:42.538346   30068 main.go:141] libmachine: (ha-406291) DBG | <network>
	I0621 18:26:42.538355   30068 main.go:141] libmachine: (ha-406291) DBG |   <name>mk-ha-406291</name>
	I0621 18:26:42.538371   30068 main.go:141] libmachine: (ha-406291) DBG |   <dns enable='no'/>
	I0621 18:26:42.538385   30068 main.go:141] libmachine: (ha-406291) DBG |   
	I0621 18:26:42.538392   30068 main.go:141] libmachine: (ha-406291) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0621 18:26:42.538400   30068 main.go:141] libmachine: (ha-406291) DBG |     <dhcp>
	I0621 18:26:42.538412   30068 main.go:141] libmachine: (ha-406291) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0621 18:26:42.538421   30068 main.go:141] libmachine: (ha-406291) DBG |     </dhcp>
	I0621 18:26:42.538439   30068 main.go:141] libmachine: (ha-406291) DBG |   </ip>
	I0621 18:26:42.538451   30068 main.go:141] libmachine: (ha-406291) DBG |   
	I0621 18:26:42.538458   30068 main.go:141] libmachine: (ha-406291) DBG | </network>
	I0621 18:26:42.538470   30068 main.go:141] libmachine: (ha-406291) DBG | 
	I0621 18:26:42.543401   30068 main.go:141] libmachine: (ha-406291) DBG | trying to create private KVM network mk-ha-406291 192.168.39.0/24...
	I0621 18:26:42.606041   30068 main.go:141] libmachine: (ha-406291) DBG | private KVM network mk-ha-406291 192.168.39.0/24 created
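Editor's note: the driver picked the free private subnet 192.168.39.0/24 and rendered the libvirt network definition dumped above (gateway .1, DHCP range .2–.253) before defining it. A minimal sketch of producing that XML with text/template, with the template inferred from the debug dump rather than taken from minikube's source:

    package main

    import (
    	"os"
    	"text/template"
    )

    // netXML mirrors the <network> definition logged above.
    const netXML = `<network>
      <name>mk-{{.Name}}</name>
      <dns enable='no'/>
      <ip address='{{.Gateway}}' netmask='255.255.255.0'>
        <dhcp>
          <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
        </dhcp>
      </ip>
    </network>
    `

    func main() {
    	t := template.Must(template.New("net").Parse(netXML))
    	// Values taken from the subnet chosen in the log: 192.168.39.0/24.
    	if err := t.Execute(os.Stdout, map[string]string{
    		"Name":      "ha-406291",
    		"Gateway":   "192.168.39.1",
    		"ClientMin": "192.168.39.2",
    		"ClientMax": "192.168.39.253",
    	}); err != nil {
    		panic(err)
    	}
    }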
	I0621 18:26:42.606072   30068 main.go:141] libmachine: (ha-406291) Setting up store path in /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 ...
	I0621 18:26:42.606091   30068 main.go:141] libmachine: (ha-406291) Building disk image from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 18:26:42.606165   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.606075   30091 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.606280   30068 main.go:141] libmachine: (ha-406291) Downloading /home/jenkins/minikube-integration/19112-8111/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0621 18:26:42.829374   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.829262   30091 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa...
	I0621 18:26:42.941790   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.941666   30091 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/ha-406291.rawdisk...
	I0621 18:26:42.941834   30068 main.go:141] libmachine: (ha-406291) DBG | Writing magic tar header
	I0621 18:26:42.941844   30068 main.go:141] libmachine: (ha-406291) DBG | Writing SSH key tar header
	I0621 18:26:42.941852   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:42.941778   30091 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 ...
	I0621 18:26:42.941909   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291
	I0621 18:26:42.941989   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291 (perms=drwx------)
	I0621 18:26:42.942007   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines
	I0621 18:26:42.942019   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines (perms=drwxr-xr-x)
	I0621 18:26:42.942033   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube (perms=drwxr-xr-x)
	I0621 18:26:42.942053   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111 (perms=drwxrwxr-x)
	I0621 18:26:42.942060   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:26:42.942069   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111
	I0621 18:26:42.942075   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0621 18:26:42.942080   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0621 18:26:42.942088   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home/jenkins
	I0621 18:26:42.942104   30068 main.go:141] libmachine: (ha-406291) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0621 18:26:42.942117   30068 main.go:141] libmachine: (ha-406291) DBG | Checking permissions on dir: /home
	I0621 18:26:42.942128   30068 main.go:141] libmachine: (ha-406291) DBG | Skipping /home - not owner
	I0621 18:26:42.942142   30068 main.go:141] libmachine: (ha-406291) Creating domain...
	I0621 18:26:42.943154   30068 main.go:141] libmachine: (ha-406291) define libvirt domain using xml: 
	I0621 18:26:42.943176   30068 main.go:141] libmachine: (ha-406291) <domain type='kvm'>
	I0621 18:26:42.943183   30068 main.go:141] libmachine: (ha-406291)   <name>ha-406291</name>
	I0621 18:26:42.943188   30068 main.go:141] libmachine: (ha-406291)   <memory unit='MiB'>2200</memory>
	I0621 18:26:42.943199   30068 main.go:141] libmachine: (ha-406291)   <vcpu>2</vcpu>
	I0621 18:26:42.943203   30068 main.go:141] libmachine: (ha-406291)   <features>
	I0621 18:26:42.943208   30068 main.go:141] libmachine: (ha-406291)     <acpi/>
	I0621 18:26:42.943212   30068 main.go:141] libmachine: (ha-406291)     <apic/>
	I0621 18:26:42.943217   30068 main.go:141] libmachine: (ha-406291)     <pae/>
	I0621 18:26:42.943223   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943229   30068 main.go:141] libmachine: (ha-406291)   </features>
	I0621 18:26:42.943234   30068 main.go:141] libmachine: (ha-406291)   <cpu mode='host-passthrough'>
	I0621 18:26:42.943255   30068 main.go:141] libmachine: (ha-406291)   
	I0621 18:26:42.943266   30068 main.go:141] libmachine: (ha-406291)   </cpu>
	I0621 18:26:42.943284   30068 main.go:141] libmachine: (ha-406291)   <os>
	I0621 18:26:42.943318   30068 main.go:141] libmachine: (ha-406291)     <type>hvm</type>
	I0621 18:26:42.943328   30068 main.go:141] libmachine: (ha-406291)     <boot dev='cdrom'/>
	I0621 18:26:42.943333   30068 main.go:141] libmachine: (ha-406291)     <boot dev='hd'/>
	I0621 18:26:42.943341   30068 main.go:141] libmachine: (ha-406291)     <bootmenu enable='no'/>
	I0621 18:26:42.943345   30068 main.go:141] libmachine: (ha-406291)   </os>
	I0621 18:26:42.943355   30068 main.go:141] libmachine: (ha-406291)   <devices>
	I0621 18:26:42.943360   30068 main.go:141] libmachine: (ha-406291)     <disk type='file' device='cdrom'>
	I0621 18:26:42.943371   30068 main.go:141] libmachine: (ha-406291)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/boot2docker.iso'/>
	I0621 18:26:42.943384   30068 main.go:141] libmachine: (ha-406291)       <target dev='hdc' bus='scsi'/>
	I0621 18:26:42.943397   30068 main.go:141] libmachine: (ha-406291)       <readonly/>
	I0621 18:26:42.943404   30068 main.go:141] libmachine: (ha-406291)     </disk>
	I0621 18:26:42.943417   30068 main.go:141] libmachine: (ha-406291)     <disk type='file' device='disk'>
	I0621 18:26:42.943429   30068 main.go:141] libmachine: (ha-406291)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0621 18:26:42.943445   30068 main.go:141] libmachine: (ha-406291)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/ha-406291.rawdisk'/>
	I0621 18:26:42.943456   30068 main.go:141] libmachine: (ha-406291)       <target dev='hda' bus='virtio'/>
	I0621 18:26:42.943478   30068 main.go:141] libmachine: (ha-406291)     </disk>
	I0621 18:26:42.943499   30068 main.go:141] libmachine: (ha-406291)     <interface type='network'>
	I0621 18:26:42.943509   30068 main.go:141] libmachine: (ha-406291)       <source network='mk-ha-406291'/>
	I0621 18:26:42.943513   30068 main.go:141] libmachine: (ha-406291)       <model type='virtio'/>
	I0621 18:26:42.943519   30068 main.go:141] libmachine: (ha-406291)     </interface>
	I0621 18:26:42.943526   30068 main.go:141] libmachine: (ha-406291)     <interface type='network'>
	I0621 18:26:42.943532   30068 main.go:141] libmachine: (ha-406291)       <source network='default'/>
	I0621 18:26:42.943539   30068 main.go:141] libmachine: (ha-406291)       <model type='virtio'/>
	I0621 18:26:42.943544   30068 main.go:141] libmachine: (ha-406291)     </interface>
	I0621 18:26:42.943549   30068 main.go:141] libmachine: (ha-406291)     <serial type='pty'>
	I0621 18:26:42.943554   30068 main.go:141] libmachine: (ha-406291)       <target port='0'/>
	I0621 18:26:42.943560   30068 main.go:141] libmachine: (ha-406291)     </serial>
	I0621 18:26:42.943565   30068 main.go:141] libmachine: (ha-406291)     <console type='pty'>
	I0621 18:26:42.943571   30068 main.go:141] libmachine: (ha-406291)       <target type='serial' port='0'/>
	I0621 18:26:42.943583   30068 main.go:141] libmachine: (ha-406291)     </console>
	I0621 18:26:42.943593   30068 main.go:141] libmachine: (ha-406291)     <rng model='virtio'>
	I0621 18:26:42.943602   30068 main.go:141] libmachine: (ha-406291)       <backend model='random'>/dev/random</backend>
	I0621 18:26:42.943609   30068 main.go:141] libmachine: (ha-406291)     </rng>
	I0621 18:26:42.943617   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943621   30068 main.go:141] libmachine: (ha-406291)     
	I0621 18:26:42.943627   30068 main.go:141] libmachine: (ha-406291)   </devices>
	I0621 18:26:42.943631   30068 main.go:141] libmachine: (ha-406291) </domain>
	I0621 18:26:42.943638   30068 main.go:141] libmachine: (ha-406291) 
	I0621 18:26:42.948298   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:44:10:c4 in network default
	I0621 18:26:42.948968   30068 main.go:141] libmachine: (ha-406291) Ensuring networks are active...
	I0621 18:26:42.948988   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:42.949710   30068 main.go:141] libmachine: (ha-406291) Ensuring network default is active
	I0621 18:26:42.950033   30068 main.go:141] libmachine: (ha-406291) Ensuring network mk-ha-406291 is active
	I0621 18:26:42.950493   30068 main.go:141] libmachine: (ha-406291) Getting domain xml...
	I0621 18:26:42.951151   30068 main.go:141] libmachine: (ha-406291) Creating domain...
	I0621 18:26:44.128421   30068 main.go:141] libmachine: (ha-406291) Waiting to get IP...
	I0621 18:26:44.129183   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.129530   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.129550   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.129513   30091 retry.go:31] will retry after 273.280189ms: waiting for machine to come up
	I0621 18:26:44.404590   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.405440   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.405467   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.405386   30091 retry.go:31] will retry after 363.287979ms: waiting for machine to come up
	I0621 18:26:44.769749   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:44.770188   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:44.770217   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:44.770146   30091 retry.go:31] will retry after 445.9009ms: waiting for machine to come up
	I0621 18:26:45.217708   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:45.218113   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:45.218132   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:45.218075   30091 retry.go:31] will retry after 497.769852ms: waiting for machine to come up
	I0621 18:26:45.717913   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:45.718380   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:45.718402   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:45.718333   30091 retry.go:31] will retry after 609.412902ms: waiting for machine to come up
	I0621 18:26:46.329589   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:46.330043   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:46.330077   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:46.330033   30091 retry.go:31] will retry after 668.226784ms: waiting for machine to come up
	I0621 18:26:46.999851   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:47.000352   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:47.000399   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:47.000310   30091 retry.go:31] will retry after 928.90777ms: waiting for machine to come up
	I0621 18:26:47.931043   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:47.931568   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:47.931598   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:47.931527   30091 retry.go:31] will retry after 1.407643188s: waiting for machine to come up
	I0621 18:26:49.341126   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:49.341529   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:49.341557   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:49.341489   30091 retry.go:31] will retry after 1.657120945s: waiting for machine to come up
	I0621 18:26:51.001518   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:51.001999   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:51.002022   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:51.001955   30091 retry.go:31] will retry after 1.506025988s: waiting for machine to come up
	I0621 18:26:52.509823   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:52.510314   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:52.510342   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:52.510269   30091 retry.go:31] will retry after 2.859818514s: waiting for machine to come up
	I0621 18:26:55.371181   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:55.371726   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:55.371755   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:55.371678   30091 retry.go:31] will retry after 3.374080501s: waiting for machine to come up
	I0621 18:26:58.747494   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:26:58.748019   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find current IP address of domain ha-406291 in network mk-ha-406291
	I0621 18:26:58.748039   30068 main.go:141] libmachine: (ha-406291) DBG | I0621 18:26:58.747991   30091 retry.go:31] will retry after 4.386740875s: waiting for machine to come up
	I0621 18:27:03.136546   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.137046   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has current primary IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.137063   30068 main.go:141] libmachine: (ha-406291) Found IP for machine: 192.168.39.198
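Editor's note: between 18:26:44 and 18:27:03 the driver polls the DHCP leases for the domain's MAC address, sleeping a little longer after each miss (273ms, 363ms, ... up to ~4.4s) until the lease for 192.168.39.198 appears. A rough sketch of that poll-with-growing-backoff loop; the real retry helper adds jitter, so the plain multiplicative backoff here is an assumption:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // waitForIP polls lookup until it returns an address, sleeping a bit longer
    // after every failed attempt, roughly like the retry.go lines in the log.
    func waitForIP(lookup func() (string, error), attempts int) (string, error) {
    	delay := 300 * time.Millisecond
    	for i := 0; i < attempts; i++ {
    		if ip, err := lookup(); err == nil {
    			return ip, nil
    		}
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
    		time.Sleep(delay)
    		delay = delay * 3 / 2 // grow the wait between polls
    	}
    	return "", errors.New("machine never reported an IP")
    }

    func main() {
    	calls := 0
    	ip, err := waitForIP(func() (string, error) {
    		calls++
    		if calls < 4 {
    			return "", errors.New("no lease yet")
    		}
    		return "192.168.39.198", nil // address eventually handed out in the log
    	}, 10)
    	fmt.Println(ip, err)
    }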
	I0621 18:27:03.137079   30068 main.go:141] libmachine: (ha-406291) Reserving static IP address...
	I0621 18:27:03.137427   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find host DHCP lease matching {name: "ha-406291", mac: "52:54:00:38:dc:46", ip: "192.168.39.198"} in network mk-ha-406291
	I0621 18:27:03.211473   30068 main.go:141] libmachine: (ha-406291) DBG | Getting to WaitForSSH function...
	I0621 18:27:03.211506   30068 main.go:141] libmachine: (ha-406291) Reserved static IP address: 192.168.39.198
	I0621 18:27:03.211519   30068 main.go:141] libmachine: (ha-406291) Waiting for SSH to be available...
	I0621 18:27:03.214029   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:03.214477   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291
	I0621 18:27:03.214509   30068 main.go:141] libmachine: (ha-406291) DBG | unable to find defined IP address of network mk-ha-406291 interface with MAC address 52:54:00:38:dc:46
	I0621 18:27:03.214661   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH client type: external
	I0621 18:27:03.214702   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa (-rw-------)
	I0621 18:27:03.214745   30068 main.go:141] libmachine: (ha-406291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:03.214771   30068 main.go:141] libmachine: (ha-406291) DBG | About to run SSH command:
	I0621 18:27:03.214784   30068 main.go:141] libmachine: (ha-406291) DBG | exit 0
	I0621 18:27:03.218578   30068 main.go:141] libmachine: (ha-406291) DBG | SSH cmd err, output: exit status 255: 
	I0621 18:27:03.218603   30068 main.go:141] libmachine: (ha-406291) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0621 18:27:03.218614   30068 main.go:141] libmachine: (ha-406291) DBG | command : exit 0
	I0621 18:27:03.218630   30068 main.go:141] libmachine: (ha-406291) DBG | err     : exit status 255
	I0621 18:27:03.218643   30068 main.go:141] libmachine: (ha-406291) DBG | output  : 
	I0621 18:27:06.220803   30068 main.go:141] libmachine: (ha-406291) DBG | Getting to WaitForSSH function...
	I0621 18:27:06.223287   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.223552   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.223591   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.223725   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH client type: external
	I0621 18:27:06.223751   30068 main.go:141] libmachine: (ha-406291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa (-rw-------)
	I0621 18:27:06.223775   30068 main.go:141] libmachine: (ha-406291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:06.223788   30068 main.go:141] libmachine: (ha-406291) DBG | About to run SSH command:
	I0621 18:27:06.223797   30068 main.go:141] libmachine: (ha-406291) DBG | exit 0
	I0621 18:27:06.345962   30068 main.go:141] libmachine: (ha-406291) DBG | SSH cmd err, output: <nil>: 
	I0621 18:27:06.346198   30068 main.go:141] libmachine: (ha-406291) KVM machine creation complete!
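Editor's note: WaitForSSH simply runs `exit 0` over ssh with host-key checking disabled until it returns status 0; the first attempt above fails with exit status 255 because the guest's sshd is not up yet, and the retry a few seconds later succeeds. A hedged sketch of that probe with os/exec, reusing the ssh options the log prints:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // sshReady runs `exit 0` on the guest the way the log shows:
    // external ssh client, no host-key checking, key-only auth.
    func sshReady(ip, keyPath string) bool {
    	cmd := exec.Command("ssh",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-o", "PasswordAuthentication=no",
    		"-i", keyPath,
    		"docker@"+ip, "exit 0")
    	return cmd.Run() == nil // exit status 0 means sshd is reachable
    }

    func main() {
    	ip := "192.168.39.198"
    	key := "/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa"
    	for !sshReady(ip, key) {
    		fmt.Println("ssh not ready, retrying in 3s")
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Println("SSH is available")
    }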
	I0621 18:27:06.346530   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:27:06.347151   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:06.347376   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:06.347539   30068 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0621 18:27:06.347553   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:06.349257   30068 main.go:141] libmachine: Detecting operating system of created instance...
	I0621 18:27:06.349272   30068 main.go:141] libmachine: Waiting for SSH to be available...
	I0621 18:27:06.349278   30068 main.go:141] libmachine: Getting to WaitForSSH function...
	I0621 18:27:06.349284   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.351365   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.351709   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.351738   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.351848   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.352053   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.352215   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.352441   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.352676   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.352926   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.352939   30068 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0621 18:27:06.449038   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:06.449066   30068 main.go:141] libmachine: Detecting the provisioner...
	I0621 18:27:06.449077   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.451811   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.452202   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.452223   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.452405   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.452602   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.452762   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.452898   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.453074   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.453321   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.453334   30068 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0621 18:27:06.550539   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0621 18:27:06.550611   30068 main.go:141] libmachine: found compatible host: buildroot
	I0621 18:27:06.550618   30068 main.go:141] libmachine: Provisioning with buildroot...
	I0621 18:27:06.550625   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.550871   30068 buildroot.go:166] provisioning hostname "ha-406291"
	I0621 18:27:06.550891   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.551068   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.553701   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.554112   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.554138   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.554279   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.554452   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.554601   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.554725   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.554869   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.555029   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.555040   30068 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291 && echo "ha-406291" | sudo tee /etc/hostname
	I0621 18:27:06.664012   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291
	
	I0621 18:27:06.664038   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.666600   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.666923   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.666952   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.667091   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.667277   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.667431   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.667559   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.667745   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:06.667932   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:06.667949   30068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:27:06.778156   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:27:06.778199   30068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:27:06.778224   30068 buildroot.go:174] setting up certificates
	I0621 18:27:06.778237   30068 provision.go:84] configureAuth start
	I0621 18:27:06.778250   30068 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:27:06.778526   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:06.781267   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.781583   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.781610   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.781773   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.784225   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.784546   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.784564   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.784717   30068 provision.go:143] copyHostCerts
	I0621 18:27:06.784747   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:06.784796   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:27:06.784813   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:06.784893   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:27:06.784992   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:06.785017   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:27:06.785023   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:06.785064   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:27:06.785126   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:06.785153   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:27:06.785162   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:06.785194   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:27:06.785257   30068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291 san=[127.0.0.1 192.168.39.198 ha-406291 localhost minikube]
	I0621 18:27:06.904910   30068 provision.go:177] copyRemoteCerts
	I0621 18:27:06.904976   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:27:06.905004   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:06.907600   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.907883   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:06.907916   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:06.908115   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:06.908308   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:06.908462   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:06.908599   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:06.987463   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:27:06.987540   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:27:07.009572   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:27:07.009661   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0621 18:27:07.031219   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:27:07.031333   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 18:27:07.052682   30068 provision.go:87] duration metric: took 274.433059ms to configureAuth
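The configureAuth step above amounts to issuing a server certificate signed by the test CA, carrying the SANs listed in the log (127.0.0.1, 192.168.39.198, ha-406291, localhost, minikube), and copying it to /etc/docker on the guest. Below is a minimal Go sketch of that issuance; the file paths, validity period, and the assumption of an RSA PKCS#1 CA key are illustrative, not minikube's actual implementation.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA pair (placeholder paths standing in for ca.pem / ca-key.pem above).
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		panic(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		panic(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
	if err != nil {
		panic(err)
	}

	// New server key plus a template carrying the SANs the log lists for ha-406291.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-406291"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour), // validity period is illustrative
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-406291", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.198")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}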
	I0621 18:27:07.052709   30068 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:27:07.052895   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:07.052984   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.055368   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.055720   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.055742   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.055971   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.056161   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.056324   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.056453   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.056615   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:07.056785   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:07.056814   30068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:27:07.307055   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 18:27:07.307083   30068 main.go:141] libmachine: Checking connection to Docker...
	I0621 18:27:07.307105   30068 main.go:141] libmachine: (ha-406291) Calling .GetURL
	I0621 18:27:07.308373   30068 main.go:141] libmachine: (ha-406291) DBG | Using libvirt version 6000000
	I0621 18:27:07.310322   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.310631   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.310658   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.310756   30068 main.go:141] libmachine: Docker is up and running!
	I0621 18:27:07.310768   30068 main.go:141] libmachine: Reticulating splines...
	I0621 18:27:07.310774   30068 client.go:171] duration metric: took 24.775558818s to LocalClient.Create
	I0621 18:27:07.310795   30068 start.go:167] duration metric: took 24.775614868s to libmachine.API.Create "ha-406291"
	I0621 18:27:07.310807   30068 start.go:293] postStartSetup for "ha-406291" (driver="kvm2")
	I0621 18:27:07.310818   30068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:27:07.310835   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.311186   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:27:07.311208   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.313308   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.313543   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.313581   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.313682   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.313855   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.314042   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.314209   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.391859   30068 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:27:07.396062   30068 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:27:07.396083   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:27:07.396132   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:27:07.396193   30068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:27:07.396202   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:27:07.396289   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:27:07.405435   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:07.427927   30068 start.go:296] duration metric: took 117.075834ms for postStartSetup
	I0621 18:27:07.427984   30068 main.go:141] libmachine: (ha-406291) Calling .GetConfigRaw
	I0621 18:27:07.428562   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:07.431157   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.431479   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.431523   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.431791   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:07.431969   30068 start.go:128] duration metric: took 24.914429669s to createHost
	I0621 18:27:07.431990   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.434121   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.434421   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.434445   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.434510   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.434692   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.434865   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.435009   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.435168   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:07.435372   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:27:07.435384   30068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0621 18:27:07.530141   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718994427.508226463
	
	I0621 18:27:07.530165   30068 fix.go:216] guest clock: 1718994427.508226463
	I0621 18:27:07.530173   30068 fix.go:229] Guest: 2024-06-21 18:27:07.508226463 +0000 UTC Remote: 2024-06-21 18:27:07.431981059 +0000 UTC m=+25.016949864 (delta=76.245404ms)
	I0621 18:27:07.530199   30068 fix.go:200] guest clock delta is within tolerance: 76.245404ms
	I0621 18:27:07.530204   30068 start.go:83] releasing machines lock for "ha-406291", held for 25.012726918s
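The guest-clock check logged just above parses the VM's `date +%s.%N` output, compares it with the host-side timestamp, and accepts the drift when it stays inside a tolerance window. A small illustrative sketch reusing the two values from the log; the 1-second tolerance is an assumption, not minikube's exact threshold.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClock parses `date +%s.%N` output (seconds.nanoseconds, 9-digit fraction).
func guestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := guestClock("1718994427.508226463") // guest value from the log
	if err != nil {
		panic(err)
	}
	// Host-side reference timestamp, also taken from the log line above.
	remote := time.Date(2024, 6, 21, 18, 27, 7, 431981059, time.UTC)
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed threshold
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta) // prints ~76.245404ms
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}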
	I0621 18:27:07.530222   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.530466   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:07.532753   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.533110   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.533151   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.533275   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533702   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533877   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:07.533978   30068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:27:07.534028   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.534087   30068 ssh_runner.go:195] Run: cat /version.json
	I0621 18:27:07.534115   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:07.536489   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536798   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.536828   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536845   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.536983   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.537154   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.537312   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:07.537330   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:07.537337   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.537509   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:07.537507   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.537675   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:07.537830   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:07.537968   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:07.610886   30068 ssh_runner.go:195] Run: systemctl --version
	I0621 18:27:07.648150   30068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:27:07.798080   30068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:27:07.803683   30068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:27:07.803731   30068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:27:07.820345   30068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0621 18:27:07.820363   30068 start.go:494] detecting cgroup driver to use...
	I0621 18:27:07.820412   30068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:27:07.835960   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:27:07.849269   30068 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:27:07.849324   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:27:07.861858   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:27:07.874371   30068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:27:07.984965   30068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:27:08.126897   30068 docker.go:233] disabling docker service ...
	I0621 18:27:08.126973   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:27:08.140294   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:27:08.152460   30068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:27:08.289101   30068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:27:08.414578   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 18:27:08.428193   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:27:08.445335   30068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:27:08.445406   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.454715   30068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:27:08.454780   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.464286   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.473688   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.483215   30068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:27:08.492907   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.502386   30068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:27:08.518138   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
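The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.9 pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and an unprivileged-port default sysctl. The following stand-alone Go sketch applies the same substitutions to an in-memory copy; the starting file contents are made up, and the real flow edits the file over SSH.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Made-up starting contents standing in for /etc/crio/crio.conf.d/02-crio.conf.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.7"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// pause_image -> registry.k8s.io/pause:3.9 (first sed above)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Drop any existing conmon_cgroup line, then set cgroup_manager and re-add conmon_cgroup = "pod".
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}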
	I0621 18:27:08.527822   30068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:27:08.536491   30068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 18:27:08.536537   30068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 18:27:08.548343   30068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
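The three commands above prepare the kernel for pod networking: the bridge-netfilter sysctl file is absent until br_netfilter is loaded, after which IPv4 forwarding is switched on. A hedged sketch of the same sequence (must run as root):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// The sysctl file only appears once the br_netfilter module is loaded.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		_ = exec.Command("modprobe", "br_netfilter").Run()
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		panic(err)
	}
}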
	I0621 18:27:08.557395   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:27:08.668782   30068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 18:27:08.793146   30068 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:27:08.793228   30068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:27:08.797886   30068 start.go:562] Will wait 60s for crictl version
	I0621 18:27:08.797933   30068 ssh_runner.go:195] Run: which crictl
	I0621 18:27:08.801183   30068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:27:08.838953   30068 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:27:08.839028   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:27:08.865047   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:27:08.892059   30068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:27:08.893365   30068 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:27:08.895801   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:08.896174   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:08.896198   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:08.896377   30068 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:27:08.900124   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
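The /etc/hosts rewrite above is idempotent: any existing host.minikube.internal line is filtered out before the fresh 192.168.39.1 mapping is appended. A loose Go sketch of the same rewrite, writing the file directly instead of going through the temp-file-plus-sudo-cp pipeline shown in the shell command:

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		// Drop any stale host.minikube.internal mapping.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}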
	I0621 18:27:08.912152   30068 kubeadm.go:877] updating cluster {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0621 18:27:08.912252   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:27:08.912299   30068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:27:08.941267   30068 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0621 18:27:08.941328   30068 ssh_runner.go:195] Run: which lz4
	I0621 18:27:08.944757   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0621 18:27:08.944843   30068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0621 18:27:08.948482   30068 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0621 18:27:08.948507   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0621 18:27:10.186487   30068 crio.go:462] duration metric: took 1.241671996s to copy over tarball
	I0621 18:27:10.186568   30068 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0621 18:27:12.219224   30068 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.032622286s)
	I0621 18:27:12.219256   30068 crio.go:469] duration metric: took 2.032747658s to extract the tarball
	I0621 18:27:12.219265   30068 ssh_runner.go:146] rm: /preloaded.tar.lz4
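The preload step above checks whether /preloaded.tar.lz4 already exists on the node, copies the ~395 MB cache tarball over when it does not, and unpacks it into /var with lz4-compressed tar so the CRI-O image store is seeded before kubeadm runs. A reduced sketch of the unpack half (the scp half is elided):

package main

import (
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // target path used in the log
	if _, err := os.Stat(tarball); err != nil {
		// In the real flow the cached tarball is scp'd from the host first; elided here.
		return
	}
	// Same flags as the logged command: preserve xattrs, decompress with lz4, unpack into /var.
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}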
	I0621 18:27:12.255526   30068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:27:12.297692   30068 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 18:27:12.297715   30068 cache_images.go:84] Images are preloaded, skipping loading
	I0621 18:27:12.297725   30068 kubeadm.go:928] updating node { 192.168.39.198 8443 v1.30.2 crio true true} ...
	I0621 18:27:12.297863   30068 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
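The kubelet drop-in shown above is generated from the node's settings (Kubernetes version, node name, node IP) and later written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A hypothetical text/template rendering of the same drop-in; the template field names are invented for this sketch and are not minikube's API:

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values taken from the logged unit for ha-406291.
	err := t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.30.2",
		"NodeName":          "ha-406291",
		"NodeIP":            "192.168.39.198",
	})
	if err != nil {
		panic(err)
	}
}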
	I0621 18:27:12.297956   30068 ssh_runner.go:195] Run: crio config
	I0621 18:27:12.347243   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:27:12.347276   30068 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0621 18:27:12.347288   30068 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0621 18:27:12.347314   30068 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.198 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-406291 NodeName:ha-406291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0621 18:27:12.347487   30068 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-406291"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
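The kubeadm config above is a single multi-document YAML file carrying InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration; it is later written to /var/tmp/minikube/kubeadm.yaml.new (2153 bytes in this run). A small sketch that splits such a file on "---" separators and reports each document's kind; it assumes gopkg.in/yaml.v3 is available:

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path from later in the log
	if err != nil {
		panic(err)
	}
	for _, doc := range strings.Split(string(raw), "\n---\n") {
		var meta struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		// Skip fragments that are not YAML documents with a kind field.
		if err := yaml.Unmarshal([]byte(doc), &meta); err != nil || meta.Kind == "" {
			continue
		}
		fmt.Printf("%s %s\n", meta.APIVersion, meta.Kind)
	}
}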
	
	I0621 18:27:12.347514   30068 kube-vip.go:115] generating kube-vip config ...
	I0621 18:27:12.347563   30068 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:27:12.362180   30068 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:27:12.362273   30068 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0621 18:27:12.362316   30068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:27:12.371448   30068 binaries.go:44] Found k8s binaries, skipping transfer
	I0621 18:27:12.371499   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0621 18:27:12.380031   30068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0621 18:27:12.395354   30068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0621 18:27:12.410533   30068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0621 18:27:12.425474   30068 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0621 18:27:12.440059   30068 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0621 18:27:12.443523   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:27:12.454828   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:27:12.572486   30068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0621 18:27:12.589057   30068 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.198
	I0621 18:27:12.589078   30068 certs.go:194] generating shared ca certs ...
	I0621 18:27:12.589095   30068 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.589221   30068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:27:12.589272   30068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:27:12.589282   30068 certs.go:256] generating profile certs ...
	I0621 18:27:12.589333   30068 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:27:12.589346   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt with IP's: []
	I0621 18:27:12.759863   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt ...
	I0621 18:27:12.759890   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt: {Name:mk1350197087e6f37ca28e80a43c199beace4f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.760090   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key ...
	I0621 18:27:12.760104   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key: {Name:mk90994b992a268304b337419707e3332d3f039a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:12.760206   30068 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92
	I0621 18:27:12.760222   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.254]
	I0621 18:27:13.132336   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 ...
	I0621 18:27:13.132362   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92: {Name:mke7daa70ff2d7bf8fa87eea51b1ed6731c0dd6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.132530   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92 ...
	I0621 18:27:13.132546   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92: {Name:mk310235904dba1c4db66ef73b8dcc06ff030051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.132647   30068 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.54585d92 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:27:13.132737   30068 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.54585d92 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
	I0621 18:27:13.132790   30068 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:27:13.132806   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt with IP's: []
	I0621 18:27:13.317891   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt ...
	I0621 18:27:13.317927   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt: {Name:mk5e450ef3633fa54e81eaeb94f9408c94729912 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.318119   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key ...
	I0621 18:27:13.318132   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key: {Name:mk3a1443924b05c36251566d5313d0eeb467e0fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:13.318220   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:27:13.318241   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:27:13.318251   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:27:13.318264   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:27:13.318274   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:27:13.318290   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:27:13.318302   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:27:13.318314   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:27:13.318363   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:27:13.318396   30068 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:27:13.318406   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:27:13.318428   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:27:13.318449   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:27:13.318469   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:27:13.318506   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:13.318531   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.318544   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.318556   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.319121   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:27:13.345382   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:27:13.379289   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:27:13.406853   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:27:13.430624   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0621 18:27:13.452498   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0621 18:27:13.474381   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:27:13.497475   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:27:13.520548   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:27:13.543849   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:27:13.569722   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:27:13.594191   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0621 18:27:13.611312   30068 ssh_runner.go:195] Run: openssl version
	I0621 18:27:13.616881   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:27:13.627054   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.631162   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.631214   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:27:13.636845   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:27:13.648132   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:27:13.658846   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.663074   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.663140   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:27:13.668358   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 18:27:13.678369   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:27:13.688293   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.692517   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.692581   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:27:13.697837   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
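Each CA above is made trusted by linking /etc/ssl/certs/<openssl-subject-hash>.0 at it; the hash values in the log (51391683, 3ec20f2e, b5213941) are what `openssl x509 -hash -noout` prints for the three PEM files. A sketch of that wiring using os/exec (run as root; error handling kept minimal):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// trust computes the OpenSSL subject hash of certPath and links /etc/ssl/certs/<hash>.0 at it.
func trust(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	_ = os.Remove(link) // mirror `ln -fs`: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	for _, c := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/15329.pem",
		"/usr/share/ca-certificates/153292.pem",
	} {
		if err := trust(c); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}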
	I0621 18:27:13.707967   30068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:27:13.711761   30068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 18:27:13.711821   30068 kubeadm.go:391] StartCluster: {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:27:13.711887   30068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0621 18:27:13.711960   30068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0621 18:27:13.752929   30068 cri.go:89] found id: ""
	I0621 18:27:13.753017   30068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0621 18:27:13.762514   30068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0621 18:27:13.771612   30068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0621 18:27:13.781740   30068 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0621 18:27:13.781758   30068 kubeadm.go:156] found existing configuration files:
	
	I0621 18:27:13.781811   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0621 18:27:13.790876   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0621 18:27:13.790943   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0621 18:27:13.800011   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0621 18:27:13.809117   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0621 18:27:13.809168   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0621 18:27:13.818279   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0621 18:27:13.827522   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0621 18:27:13.827584   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0621 18:27:13.836671   30068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0621 18:27:13.845242   30068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0621 18:27:13.845298   30068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0621 18:27:13.854365   30068 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0621 18:27:13.951888   30068 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0621 18:27:13.951970   30068 kubeadm.go:309] [preflight] Running pre-flight checks
	I0621 18:27:14.081675   30068 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0621 18:27:14.081845   30068 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0621 18:27:14.081983   30068 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0621 18:27:14.292951   30068 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0621 18:27:14.423174   30068 out.go:204]   - Generating certificates and keys ...
	I0621 18:27:14.423287   30068 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0621 18:27:14.423355   30068 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0621 18:27:14.524306   30068 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0621 18:27:14.693249   30068 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0621 18:27:14.771462   30068 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0621 18:27:14.965492   30068 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0621 18:27:15.095342   30068 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0621 18:27:15.095646   30068 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-406291 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I0621 18:27:15.247328   30068 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0621 18:27:15.247729   30068 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-406291 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I0621 18:27:15.326656   30068 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0621 18:27:15.470979   30068 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0621 18:27:15.620090   30068 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0621 18:27:15.620402   30068 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0621 18:27:15.715693   30068 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0621 18:27:16.259484   30068 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0621 18:27:16.704626   30068 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0621 18:27:16.836633   30068 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0621 18:27:16.996818   30068 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0621 18:27:16.997517   30068 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0621 18:27:16.999949   30068 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0621 18:27:17.001874   30068 out.go:204]   - Booting up control plane ...
	I0621 18:27:17.001982   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0621 18:27:17.002874   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0621 18:27:17.003729   30068 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0621 18:27:17.018894   30068 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0621 18:27:17.019816   30068 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0621 18:27:17.019944   30068 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0621 18:27:17.138099   30068 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0621 18:27:17.138195   30068 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0621 18:27:17.639115   30068 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.282189ms
	I0621 18:27:17.639214   30068 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0621 18:27:23.502026   30068 kubeadm.go:309] [api-check] The API server is healthy after 5.864418149s
	I0621 18:27:23.512938   30068 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0621 18:27:23.528670   30068 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0621 18:27:24.059886   30068 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0621 18:27:24.060060   30068 kubeadm.go:309] [mark-control-plane] Marking the node ha-406291 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0621 18:27:24.071607   30068 kubeadm.go:309] [bootstrap-token] Using token: ha2utu.p9k0bq1xsr5791t7
	I0621 18:27:24.073185   30068 out.go:204]   - Configuring RBAC rules ...
	I0621 18:27:24.073336   30068 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0621 18:27:24.084336   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0621 18:27:24.092265   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0621 18:27:24.096415   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0621 18:27:24.101175   30068 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0621 18:27:24.104689   30068 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0621 18:27:24.121568   30068 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0621 18:27:24.349610   30068 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0621 18:27:24.907607   30068 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0621 18:27:24.908452   30068 kubeadm.go:309] 
	I0621 18:27:24.908529   30068 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0621 18:27:24.908541   30068 kubeadm.go:309] 
	I0621 18:27:24.908607   30068 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0621 18:27:24.908645   30068 kubeadm.go:309] 
	I0621 18:27:24.908698   30068 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0621 18:27:24.908780   30068 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0621 18:27:24.908863   30068 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0621 18:27:24.908873   30068 kubeadm.go:309] 
	I0621 18:27:24.908975   30068 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0621 18:27:24.908993   30068 kubeadm.go:309] 
	I0621 18:27:24.909038   30068 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0621 18:27:24.909045   30068 kubeadm.go:309] 
	I0621 18:27:24.909086   30068 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0621 18:27:24.909160   30068 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0621 18:27:24.909256   30068 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0621 18:27:24.909274   30068 kubeadm.go:309] 
	I0621 18:27:24.909401   30068 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0621 18:27:24.909522   30068 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0621 18:27:24.909544   30068 kubeadm.go:309] 
	I0621 18:27:24.909671   30068 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ha2utu.p9k0bq1xsr5791t7 \
	I0621 18:27:24.909771   30068 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df \
	I0621 18:27:24.909810   30068 kubeadm.go:309] 	--control-plane 
	I0621 18:27:24.909824   30068 kubeadm.go:309] 
	I0621 18:27:24.909898   30068 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0621 18:27:24.909904   30068 kubeadm.go:309] 
	I0621 18:27:24.909977   30068 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ha2utu.p9k0bq1xsr5791t7 \
	I0621 18:27:24.910064   30068 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:25b189dd8842da29004c6e91dd5dbce76990a035c20bc2914c46f3371e3a47df 
	I0621 18:27:24.910664   30068 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
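
The preflight-error list passed to kubeadm init above (DirAvailable--*, FileAvailable--*, Port-10250, Swap, NumCPU, Mem) is just a comma-joined set of check names appended to the command line. A minimal, hypothetical sketch of assembling and running such an invocation with Go's standard library follows; the helper name and the trimmed flag list are illustrative, not minikube's bootstrapper code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// buildKubeadmInitArgs mirrors the shape of the invocation logged above:
// a config file plus a comma-separated list of preflight checks to ignore.
// The helper name and the shortened check list are illustrative only.
func buildKubeadmInitArgs(configPath string, ignored []string) []string {
	return []string{
		"init",
		"--config", configPath,
		"--ignore-preflight-errors=" + strings.Join(ignored, ","),
	}
}

func main() {
	args := buildKubeadmInitArgs("/var/tmp/minikube/kubeadm.yaml",
		[]string{"DirAvailable--etc-kubernetes-manifests", "Port-10250", "Swap", "NumCPU", "Mem"})
	out, err := exec.Command("kubeadm", args...).CombinedOutput()
	fmt.Println(string(out), err)
}
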
	I0621 18:27:24.910700   30068 cni.go:84] Creating CNI manager for ""
	I0621 18:27:24.910708   30068 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0621 18:27:24.912398   30068 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0621 18:27:24.913676   30068 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0621 18:27:24.919660   30068 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0621 18:27:24.919677   30068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0621 18:27:24.938734   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0621 18:27:25.303975   30068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0621 18:27:25.304070   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:25.304073   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-406291 minikube.k8s.io/updated_at=2024_06_21T18_27_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600 minikube.k8s.io/name=ha-406291 minikube.k8s.io/primary=true
	I0621 18:27:25.334777   30068 ops.go:34] apiserver oom_adj: -16
	I0621 18:27:25.436873   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:25.937461   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:26.436991   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:26.937206   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:27.437152   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:27.937860   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:28.437177   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:28.937036   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:29.437007   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:29.937140   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:30.437060   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:30.937199   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:31.437695   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:31.937675   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:32.437034   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:32.937808   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:33.437793   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:33.937401   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:34.437307   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:34.937172   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:35.437428   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:35.937146   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:36.436951   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:36.937873   30068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0621 18:27:37.039583   30068 kubeadm.go:1107] duration metric: took 11.735587948s to wait for elevateKubeSystemPrivileges
	W0621 18:27:37.039626   30068 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0621 18:27:37.039635   30068 kubeadm.go:393] duration metric: took 23.327819322s to StartCluster
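
The burst of identical `kubectl get sa default` runs between 18:27:25 and 18:27:37 is a poll-until-ready loop: the bootstrapper waits for the default service account to appear before granting kube-system privileges, which is the 11.7s "elevateKubeSystemPrivileges" metric above. A minimal sketch of that pattern, with an illustrative helper name:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount polls `kubectl get sa default` at a fixed
// interval until the command succeeds or the timeout elapses, mirroring
// the repeated ssh_runner calls in the log above.
func waitForDefaultServiceAccount(kubeconfig string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default service account exists; RBAC can proceed
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig",
		500*time.Millisecond, 2*time.Minute)
	fmt.Println(err)
}
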
	I0621 18:27:37.039654   30068 settings.go:142] acquiring lock: {Name:mkdbb660cad4d8fb446e5c2ca4439ea3326e9592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:37.039737   30068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:27:37.040362   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/kubeconfig: {Name:mk87038194ab41f67dd50d90b017d32a83c3da4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:27:37.040584   30068 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:27:37.040604   30068 start.go:240] waiting for startup goroutines ...
	I0621 18:27:37.040603   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0621 18:27:37.040612   30068 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0621 18:27:37.040669   30068 addons.go:69] Setting storage-provisioner=true in profile "ha-406291"
	I0621 18:27:37.040677   30068 addons.go:69] Setting default-storageclass=true in profile "ha-406291"
	I0621 18:27:37.040699   30068 addons.go:234] Setting addon storage-provisioner=true in "ha-406291"
	I0621 18:27:37.040730   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:27:37.040700   30068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-406291"
	I0621 18:27:37.040772   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:37.041052   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.041075   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.041146   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.041174   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.055583   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42699
	I0621 18:27:37.056062   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.056549   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.056570   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.056894   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.057371   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.057399   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.061343   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44857
	I0621 18:27:37.061846   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.062393   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.062418   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.062721   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.062885   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.065021   30068 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:27:37.065351   30068 kapi.go:59] client config for ha-406291: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.crt", KeyFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key", CAFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf98a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0621 18:27:37.065825   30068 cert_rotation.go:137] Starting client certificate rotation controller
	I0621 18:27:37.066065   30068 addons.go:234] Setting addon default-storageclass=true in "ha-406291"
	I0621 18:27:37.066106   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:27:37.066471   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.066512   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.072759   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39433
	I0621 18:27:37.073274   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.073791   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.073819   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.074169   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.074346   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.076096   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:37.078312   30068 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0621 18:27:37.079815   30068 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 18:27:37.079840   30068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0621 18:27:37.079864   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:37.081896   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0621 18:27:37.082293   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.082859   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.082878   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.083163   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.083202   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.083607   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.083648   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.083733   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:37.083752   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.083817   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:37.083990   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:37.084135   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:37.084288   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:27:37.103512   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42081
	I0621 18:27:37.103937   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.104456   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.104473   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.104853   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.105052   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:27:37.106976   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:27:37.107211   30068 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0621 18:27:37.107231   30068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0621 18:27:37.107252   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:27:37.110295   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.110729   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:27:37.110755   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:27:37.110870   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:27:37.111030   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:27:37.111197   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:27:37.111314   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
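
Each `new ssh client` entry above records an IP, port, key path and user; the commands that follow (the CoreDNS edit, the addon applies) run over that connection. A small, illustrative sketch of the same idea using golang.org/x/crypto/ssh (not minikube's sshutil code; host-key checking is skipped only because the target is a throwaway test VM):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials host:22 with a private key and runs one command,
// roughly what the ssh_runner entries in the log above do.
func runOverSSH(host, user, keyPath, command string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a disposable test VM
	}
	client, err := ssh.Dial("tcp", host+":22", cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(command)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.39.198", "docker",
		"/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa",
		"cat /etc/os-release")
	fmt.Println(out, err)
}
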
	I0621 18:27:37.137868   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0621 18:27:37.228739   30068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 18:27:37.290397   30068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0621 18:27:37.684619   30068 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
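
The long sed pipeline above injects a hosts block for host.minikube.internal ahead of CoreDNS's forward directive and re-applies the ConfigMap, which is what the "host record injected" line confirms. A minimal sketch of that text transformation (the sample Corefile and the function name are illustrative):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS hosts block for host.minikube.internal
// immediately before the "forward . /etc/resolv.conf" directive, mirroring
// the sed edit applied to the coredns ConfigMap in the log above.
func injectHostRecord(corefile, hostIP string) string {
	block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(block)
		}
		out.WriteString(line + "\n")
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}
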
	I0621 18:27:37.902862   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.902882   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.902957   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.902988   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903179   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903194   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903203   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.903210   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903287   30068 main.go:141] libmachine: (ha-406291) DBG | Closing plugin on server side
	I0621 18:27:37.903300   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903312   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903321   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.903328   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.903474   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903485   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903513   30068 main.go:141] libmachine: (ha-406291) DBG | Closing plugin on server side
	I0621 18:27:37.903578   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.903595   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.903740   30068 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0621 18:27:37.903767   30068 round_trippers.go:469] Request Headers:
	I0621 18:27:37.903778   30068 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:27:37.903784   30068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:27:37.922164   30068 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0621 18:27:37.922691   30068 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0621 18:27:37.922706   30068 round_trippers.go:469] Request Headers:
	I0621 18:27:37.922713   30068 round_trippers.go:473]     Accept: application/json, */*
	I0621 18:27:37.922718   30068 round_trippers.go:473]     Content-Type: application/json
	I0621 18:27:37.922720   30068 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0621 18:27:37.926249   30068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0621 18:27:37.926491   30068 main.go:141] libmachine: Making call to close driver server
	I0621 18:27:37.926512   30068 main.go:141] libmachine: (ha-406291) Calling .Close
	I0621 18:27:37.926731   30068 main.go:141] libmachine: Successfully made call to close driver server
	I0621 18:27:37.926748   30068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 18:27:37.928515   30068 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0621 18:27:37.930095   30068 addons.go:510] duration metric: took 889.47949ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0621 18:27:37.930127   30068 start.go:245] waiting for cluster config update ...
	I0621 18:27:37.930137   30068 start.go:254] writing updated cluster config ...
	I0621 18:27:37.931687   30068 out.go:177] 
	I0621 18:27:37.933039   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:37.933102   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:37.934716   30068 out.go:177] * Starting "ha-406291-m02" control-plane node in "ha-406291" cluster
	I0621 18:27:37.935953   30068 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:27:37.935970   30068 cache.go:56] Caching tarball of preloaded images
	I0621 18:27:37.936052   30068 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:27:37.936063   30068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:27:37.936142   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:37.936325   30068 start.go:360] acquireMachinesLock for ha-406291-m02: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:27:37.936370   30068 start.go:364] duration metric: took 24.972µs to acquireMachinesLock for "ha-406291-m02"
	I0621 18:27:37.936392   30068 start.go:93] Provisioning new machine with config: &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 18:27:37.936481   30068 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0621 18:27:37.938349   30068 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0621 18:27:37.938428   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:27:37.938450   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:27:37.952767   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34515
	I0621 18:27:37.953176   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:27:37.953649   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:27:37.953669   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:27:37.953963   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:27:37.954162   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:37.954301   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:37.954431   30068 start.go:159] libmachine.API.Create for "ha-406291" (driver="kvm2")
	I0621 18:27:37.954456   30068 client.go:168] LocalClient.Create starting
	I0621 18:27:37.954488   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem
	I0621 18:27:37.954518   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:27:37.954538   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:27:37.954589   30068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem
	I0621 18:27:37.954607   30068 main.go:141] libmachine: Decoding PEM data...
	I0621 18:27:37.954621   30068 main.go:141] libmachine: Parsing certificate...
	I0621 18:27:37.954636   30068 main.go:141] libmachine: Running pre-create checks...
	I0621 18:27:37.954644   30068 main.go:141] libmachine: (ha-406291-m02) Calling .PreCreateCheck
	I0621 18:27:37.954836   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:37.955238   30068 main.go:141] libmachine: Creating machine...
	I0621 18:27:37.955253   30068 main.go:141] libmachine: (ha-406291-m02) Calling .Create
	I0621 18:27:37.955404   30068 main.go:141] libmachine: (ha-406291-m02) Creating KVM machine...
	I0621 18:27:37.956748   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found existing default KVM network
	I0621 18:27:37.956951   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found existing private KVM network mk-ha-406291
	I0621 18:27:37.957069   30068 main.go:141] libmachine: (ha-406291-m02) Setting up store path in /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 ...
	I0621 18:27:37.957091   30068 main.go:141] libmachine: (ha-406291-m02) Building disk image from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 18:27:37.957139   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:37.957062   30460 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:27:37.957278   30068 main.go:141] libmachine: (ha-406291-m02) Downloading /home/jenkins/minikube-integration/19112-8111/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0621 18:27:38.178433   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.178291   30460 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa...
	I0621 18:27:38.322659   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.322470   30460 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/ha-406291-m02.rawdisk...
	I0621 18:27:38.322709   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 (perms=drwx------)
	I0621 18:27:38.322719   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Writing magic tar header
	I0621 18:27:38.322734   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Writing SSH key tar header
	I0621 18:27:38.322745   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:38.322583   30460 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02 ...
	I0621 18:27:38.322758   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02
	I0621 18:27:38.322822   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines
	I0621 18:27:38.322839   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines (perms=drwxr-xr-x)
	I0621 18:27:38.322855   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube (perms=drwxr-xr-x)
	I0621 18:27:38.322864   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111 (perms=drwxrwxr-x)
	I0621 18:27:38.322874   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0621 18:27:38.322882   30068 main.go:141] libmachine: (ha-406291-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0621 18:27:38.322896   30068 main.go:141] libmachine: (ha-406291-m02) Creating domain...
	I0621 18:27:38.322919   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:27:38.322939   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111
	I0621 18:27:38.322950   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0621 18:27:38.322968   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home/jenkins
	I0621 18:27:38.322980   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Checking permissions on dir: /home
	I0621 18:27:38.322988   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Skipping /home - not owner
	I0621 18:27:38.324031   30068 main.go:141] libmachine: (ha-406291-m02) define libvirt domain using xml: 
	I0621 18:27:38.324058   30068 main.go:141] libmachine: (ha-406291-m02) <domain type='kvm'>
	I0621 18:27:38.324071   30068 main.go:141] libmachine: (ha-406291-m02)   <name>ha-406291-m02</name>
	I0621 18:27:38.324078   30068 main.go:141] libmachine: (ha-406291-m02)   <memory unit='MiB'>2200</memory>
	I0621 18:27:38.324087   30068 main.go:141] libmachine: (ha-406291-m02)   <vcpu>2</vcpu>
	I0621 18:27:38.324098   30068 main.go:141] libmachine: (ha-406291-m02)   <features>
	I0621 18:27:38.324107   30068 main.go:141] libmachine: (ha-406291-m02)     <acpi/>
	I0621 18:27:38.324116   30068 main.go:141] libmachine: (ha-406291-m02)     <apic/>
	I0621 18:27:38.324125   30068 main.go:141] libmachine: (ha-406291-m02)     <pae/>
	I0621 18:27:38.324134   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324149   30068 main.go:141] libmachine: (ha-406291-m02)   </features>
	I0621 18:27:38.324164   30068 main.go:141] libmachine: (ha-406291-m02)   <cpu mode='host-passthrough'>
	I0621 18:27:38.324173   30068 main.go:141] libmachine: (ha-406291-m02)   
	I0621 18:27:38.324184   30068 main.go:141] libmachine: (ha-406291-m02)   </cpu>
	I0621 18:27:38.324199   30068 main.go:141] libmachine: (ha-406291-m02)   <os>
	I0621 18:27:38.324209   30068 main.go:141] libmachine: (ha-406291-m02)     <type>hvm</type>
	I0621 18:27:38.324220   30068 main.go:141] libmachine: (ha-406291-m02)     <boot dev='cdrom'/>
	I0621 18:27:38.324231   30068 main.go:141] libmachine: (ha-406291-m02)     <boot dev='hd'/>
	I0621 18:27:38.324258   30068 main.go:141] libmachine: (ha-406291-m02)     <bootmenu enable='no'/>
	I0621 18:27:38.324280   30068 main.go:141] libmachine: (ha-406291-m02)   </os>
	I0621 18:27:38.324293   30068 main.go:141] libmachine: (ha-406291-m02)   <devices>
	I0621 18:27:38.324310   30068 main.go:141] libmachine: (ha-406291-m02)     <disk type='file' device='cdrom'>
	I0621 18:27:38.324333   30068 main.go:141] libmachine: (ha-406291-m02)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/boot2docker.iso'/>
	I0621 18:27:38.324344   30068 main.go:141] libmachine: (ha-406291-m02)       <target dev='hdc' bus='scsi'/>
	I0621 18:27:38.324350   30068 main.go:141] libmachine: (ha-406291-m02)       <readonly/>
	I0621 18:27:38.324357   30068 main.go:141] libmachine: (ha-406291-m02)     </disk>
	I0621 18:27:38.324363   30068 main.go:141] libmachine: (ha-406291-m02)     <disk type='file' device='disk'>
	I0621 18:27:38.324375   30068 main.go:141] libmachine: (ha-406291-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0621 18:27:38.324390   30068 main.go:141] libmachine: (ha-406291-m02)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/ha-406291-m02.rawdisk'/>
	I0621 18:27:38.324401   30068 main.go:141] libmachine: (ha-406291-m02)       <target dev='hda' bus='virtio'/>
	I0621 18:27:38.324412   30068 main.go:141] libmachine: (ha-406291-m02)     </disk>
	I0621 18:27:38.324421   30068 main.go:141] libmachine: (ha-406291-m02)     <interface type='network'>
	I0621 18:27:38.324431   30068 main.go:141] libmachine: (ha-406291-m02)       <source network='mk-ha-406291'/>
	I0621 18:27:38.324442   30068 main.go:141] libmachine: (ha-406291-m02)       <model type='virtio'/>
	I0621 18:27:38.324453   30068 main.go:141] libmachine: (ha-406291-m02)     </interface>
	I0621 18:27:38.324465   30068 main.go:141] libmachine: (ha-406291-m02)     <interface type='network'>
	I0621 18:27:38.324474   30068 main.go:141] libmachine: (ha-406291-m02)       <source network='default'/>
	I0621 18:27:38.324481   30068 main.go:141] libmachine: (ha-406291-m02)       <model type='virtio'/>
	I0621 18:27:38.324493   30068 main.go:141] libmachine: (ha-406291-m02)     </interface>
	I0621 18:27:38.324503   30068 main.go:141] libmachine: (ha-406291-m02)     <serial type='pty'>
	I0621 18:27:38.324516   30068 main.go:141] libmachine: (ha-406291-m02)       <target port='0'/>
	I0621 18:27:38.324527   30068 main.go:141] libmachine: (ha-406291-m02)     </serial>
	I0621 18:27:38.324540   30068 main.go:141] libmachine: (ha-406291-m02)     <console type='pty'>
	I0621 18:27:38.324553   30068 main.go:141] libmachine: (ha-406291-m02)       <target type='serial' port='0'/>
	I0621 18:27:38.324562   30068 main.go:141] libmachine: (ha-406291-m02)     </console>
	I0621 18:27:38.324572   30068 main.go:141] libmachine: (ha-406291-m02)     <rng model='virtio'>
	I0621 18:27:38.324596   30068 main.go:141] libmachine: (ha-406291-m02)       <backend model='random'>/dev/random</backend>
	I0621 18:27:38.324609   30068 main.go:141] libmachine: (ha-406291-m02)     </rng>
	I0621 18:27:38.324630   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324640   30068 main.go:141] libmachine: (ha-406291-m02)     
	I0621 18:27:38.324648   30068 main.go:141] libmachine: (ha-406291-m02)   </devices>
	I0621 18:27:38.324660   30068 main.go:141] libmachine: (ha-406291-m02) </domain>
	I0621 18:27:38.324670   30068 main.go:141] libmachine: (ha-406291-m02) 
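
The XML logged line-by-line above is what the kvm2 driver hands to libvirt to define the m02 domain (boot ISO, raw disk, two virtio NICs, serial console, RNG). A bare-bones sketch of defining and starting a domain with the libvirt Go bindings, assuming the libvirt.org/go/libvirt package and an already-rendered XML file; this is not the driver's actual code:

package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

// defineAndStart registers a domain from a rendered XML definition and
// boots it, roughly the step the log above performs after emitting the XML.
func defineAndStart(xmlPath string) error {
	xml, err := os.ReadFile(xmlPath)
	if err != nil {
		return err
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		return err
	}
	defer dom.Free()
	return dom.Create() // start the freshly defined domain
}

func main() {
	fmt.Println(defineAndStart("ha-406291-m02.xml"))
}
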
	I0621 18:27:38.332042   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:20:08:0e in network default
	I0621 18:27:38.332641   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring networks are active...
	I0621 18:27:38.332676   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:38.333428   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring network default is active
	I0621 18:27:38.333804   30068 main.go:141] libmachine: (ha-406291-m02) Ensuring network mk-ha-406291 is active
	I0621 18:27:38.334296   30068 main.go:141] libmachine: (ha-406291-m02) Getting domain xml...
	I0621 18:27:38.335120   30068 main.go:141] libmachine: (ha-406291-m02) Creating domain...
	I0621 18:27:39.549305   30068 main.go:141] libmachine: (ha-406291-m02) Waiting to get IP...
	I0621 18:27:39.550967   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:39.551951   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:39.551976   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:39.551936   30460 retry.go:31] will retry after 267.635955ms: waiting for machine to come up
	I0621 18:27:39.821522   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:39.821997   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:39.822029   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:39.821946   30460 retry.go:31] will retry after 374.873977ms: waiting for machine to come up
	I0621 18:27:40.198386   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:40.198873   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:40.198904   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:40.198809   30460 retry.go:31] will retry after 315.815993ms: waiting for machine to come up
	I0621 18:27:40.516366   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:40.516862   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:40.516886   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:40.516817   30460 retry.go:31] will retry after 541.866776ms: waiting for machine to come up
	I0621 18:27:41.060525   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:41.061206   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:41.061240   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:41.061128   30460 retry.go:31] will retry after 493.062164ms: waiting for machine to come up
	I0621 18:27:41.555747   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:41.556109   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:41.556139   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:41.556061   30460 retry.go:31] will retry after 805.68132ms: waiting for machine to come up
	I0621 18:27:42.362929   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:42.363432   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:42.363464   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:42.363390   30460 retry.go:31] will retry after 986.445399ms: waiting for machine to come up
	I0621 18:27:43.351818   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:43.352265   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:43.352293   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:43.352201   30460 retry.go:31] will retry after 1.001415085s: waiting for machine to come up
	I0621 18:27:44.355253   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:44.355689   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:44.355710   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:44.355671   30460 retry.go:31] will retry after 1.270979624s: waiting for machine to come up
	I0621 18:27:45.627787   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:45.628323   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:45.628354   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:45.628272   30460 retry.go:31] will retry after 2.328221347s: waiting for machine to come up
	I0621 18:27:47.958352   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:47.958918   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:47.958945   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:47.958858   30460 retry.go:31] will retry after 2.603205559s: waiting for machine to come up
	I0621 18:27:50.565502   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:50.565956   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:50.565982   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:50.565839   30460 retry.go:31] will retry after 3.267607258s: waiting for machine to come up
	I0621 18:27:53.834801   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:53.835311   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find current IP address of domain ha-406291-m02 in network mk-ha-406291
	I0621 18:27:53.835344   30068 main.go:141] libmachine: (ha-406291-m02) DBG | I0621 18:27:53.835270   30460 retry.go:31] will retry after 4.450176964s: waiting for machine to come up
	I0621 18:27:58.286744   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.287205   30068 main.go:141] libmachine: (ha-406291-m02) Found IP for machine: 192.168.39.89
	I0621 18:27:58.287228   30068 main.go:141] libmachine: (ha-406291-m02) Reserving static IP address...
	I0621 18:27:58.287241   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has current primary IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.287601   30068 main.go:141] libmachine: (ha-406291-m02) DBG | unable to find host DHCP lease matching {name: "ha-406291-m02", mac: "52:54:00:a6:9a:09", ip: "192.168.39.89"} in network mk-ha-406291
	I0621 18:27:58.359643   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Getting to WaitForSSH function...
	I0621 18:27:58.359672   30068 main.go:141] libmachine: (ha-406291-m02) Reserved static IP address: 192.168.39.89
	I0621 18:27:58.359686   30068 main.go:141] libmachine: (ha-406291-m02) Waiting for SSH to be available...
	I0621 18:27:58.362234   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.362656   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.362687   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.362831   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH client type: external
	I0621 18:27:58.362856   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa (-rw-------)
	I0621 18:27:58.362889   30068 main.go:141] libmachine: (ha-406291-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 18:27:58.362901   30068 main.go:141] libmachine: (ha-406291-m02) DBG | About to run SSH command:
	I0621 18:27:58.362914   30068 main.go:141] libmachine: (ha-406291-m02) DBG | exit 0
	I0621 18:27:58.489760   30068 main.go:141] libmachine: (ha-406291-m02) DBG | SSH cmd err, output: <nil>: 
	I0621 18:27:58.490247   30068 main.go:141] libmachine: (ha-406291-m02) KVM machine creation complete!
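
The run of "will retry after ..." lines while waiting for an IP, followed by the WaitForSSH probe, is a retry loop with growing delays. A generic sketch of that pattern; the lookup callback is a stand-in for the DHCP-lease query, not the driver's implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries lookup with a growing, jittered delay, much like the
// "will retry after ..." sequence above, until an IP is returned or the
// attempt budget runs out. lookup stands in for the libvirt lease query.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)))
		time.Sleep(delay + jitter)
		delay *= 2 // back off before the next attempt
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.89", nil
	}, 10)
	fmt.Println(ip, err)
}
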
	I0621 18:27:58.490512   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:58.491093   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:58.491338   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:58.491506   30068 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0621 18:27:58.491523   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:27:58.492807   30068 main.go:141] libmachine: Detecting operating system of created instance...
	I0621 18:27:58.492820   30068 main.go:141] libmachine: Waiting for SSH to be available...
	I0621 18:27:58.492825   30068 main.go:141] libmachine: Getting to WaitForSSH function...
	I0621 18:27:58.492853   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.495422   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.495802   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.495822   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.496013   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.496199   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.496377   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.496515   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.496690   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.496943   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.496957   30068 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0621 18:27:58.609072   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
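Both SSH checks above boil down to connecting as the docker user with the generated id_rsa and running `exit 0`; the "native" client is a Go-level connection rather than a shelled-out ssh binary. A minimal sketch of that reachability probe using golang.org/x/crypto/ssh (host, user and key path taken from this run; this is illustrative, not minikube's actual code):

	package main

	import (
		"fmt"
		"log"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// probeSSH dials host:22 with the given private key and runs "exit 0",
	// mirroring the WaitForSSH-style check in the log above.
	func probeSSH(host, user, keyPath string) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", host+":22", cfg)
		if err != nil {
			return err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		return sess.Run("exit 0")
	}

	func main() {
		key := "/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa"
		if err := probeSSH("192.168.39.89", "docker", key); err != nil {
			log.Fatal(err)
		}
		fmt.Println("SSH is available")
	}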
	I0621 18:27:58.609094   30068 main.go:141] libmachine: Detecting the provisioner...
	I0621 18:27:58.609101   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.611976   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.612412   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.612450   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.612655   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.612869   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.613083   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.613234   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.613421   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.613617   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.613629   30068 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0621 18:27:58.726636   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0621 18:27:58.726736   30068 main.go:141] libmachine: found compatible host: buildroot
	I0621 18:27:58.726751   30068 main.go:141] libmachine: Provisioning with buildroot...
	I0621 18:27:58.726768   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.727017   30068 buildroot.go:166] provisioning hostname "ha-406291-m02"
	I0621 18:27:58.727040   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.727234   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.729851   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.730255   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.730296   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.730453   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.730628   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.730787   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.730932   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.731090   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.731271   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.731295   30068 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291-m02 && echo "ha-406291-m02" | sudo tee /etc/hostname
	I0621 18:27:58.855682   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291-m02
	
	I0621 18:27:58.855710   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.858373   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.858679   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.858702   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.858921   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:58.859107   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.859289   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:58.859473   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:58.859613   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:58.859768   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:58.859784   30068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:27:58.979692   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
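The /etc/hosts snippet above is generated per node with the hostname substituted in, and it only rewrites the 127.0.1.1 entry when the name is not already present. A small sketch of how such a snippet can be templated in Go (the helper name is illustrative):

	package main

	import "fmt"

	// hostsUpdateScript returns an idempotent shell snippet that maps 127.0.1.1
	// to the node's hostname, as in the provisioning step above.
	func hostsUpdateScript(hostname string) string {
		return fmt.Sprintf(`
			if ! grep -xq '.*\s%[1]s' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
				else
					echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
				fi
			fi`, hostname)
	}

	func main() {
		fmt.Println(hostsUpdateScript("ha-406291-m02"))
	}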
	I0621 18:27:58.979722   30068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:27:58.979735   30068 buildroot.go:174] setting up certificates
	I0621 18:27:58.979743   30068 provision.go:84] configureAuth start
	I0621 18:27:58.979750   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetMachineName
	I0621 18:27:58.980076   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:58.982730   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.983078   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.983110   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.983252   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:58.985344   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.985701   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:58.985721   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:58.985890   30068 provision.go:143] copyHostCerts
	I0621 18:27:58.985924   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:58.985962   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:27:58.985976   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:27:58.986057   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:27:58.986156   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:58.986180   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:27:58.986187   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:27:58.986229   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:27:58.986293   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:58.986317   30068 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:27:58.986326   30068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:27:58.986360   30068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:27:58.986426   30068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291-m02 san=[127.0.0.1 192.168.39.89 ha-406291-m02 localhost minikube]
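The server cert generated here carries the node's IPs and hostnames as SANs (127.0.0.1, 192.168.39.89, ha-406291-m02, localhost, minikube). A self-contained sketch of producing such a SAN certificate with Go's crypto/x509, signed here by a throwaway CA rather than the real minikube CA key:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA (the real flow reuses ca.pem / ca-key.pem from the store).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "example-ca"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the SANs listed in the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-406291-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.89")},
			DNSNames:     []string{"ha-406291-m02", "localhost", "minikube"},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}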
	I0621 18:27:59.066564   30068 provision.go:177] copyRemoteCerts
	I0621 18:27:59.066626   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:27:59.066653   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.069578   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.069924   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.069947   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.070132   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.070298   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.070432   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.070553   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.157218   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:27:59.157315   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 18:27:59.181198   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:27:59.181277   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:27:59.204590   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:27:59.204671   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0621 18:27:59.228836   30068 provision.go:87] duration metric: took 249.081961ms to configureAuth
	I0621 18:27:59.228857   30068 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:27:59.229023   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:27:59.229086   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.231759   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.232083   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.232114   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.232338   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.232525   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.232684   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.232859   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.233030   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:59.233222   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:59.233247   30068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:27:59.513149   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 18:27:59.513176   30068 main.go:141] libmachine: Checking connection to Docker...
	I0621 18:27:59.513184   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetURL
	I0621 18:27:59.514352   30068 main.go:141] libmachine: (ha-406291-m02) DBG | Using libvirt version 6000000
	I0621 18:27:59.516825   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.517208   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.517232   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.517421   30068 main.go:141] libmachine: Docker is up and running!
	I0621 18:27:59.517438   30068 main.go:141] libmachine: Reticulating splines...
	I0621 18:27:59.517446   30068 client.go:171] duration metric: took 21.562982419s to LocalClient.Create
	I0621 18:27:59.517469   30068 start.go:167] duration metric: took 21.563040702s to libmachine.API.Create "ha-406291"
	I0621 18:27:59.517482   30068 start.go:293] postStartSetup for "ha-406291-m02" (driver="kvm2")
	I0621 18:27:59.517494   30068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:27:59.517516   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.517768   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:27:59.517792   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.520113   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.520510   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.520540   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.520681   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.520881   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.521084   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.521256   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.607755   30068 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:27:59.611555   30068 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:27:59.611581   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:27:59.611696   30068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:27:59.611804   30068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:27:59.611817   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:27:59.611939   30068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:27:59.620359   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:27:59.643420   30068 start.go:296] duration metric: took 125.923821ms for postStartSetup
	I0621 18:27:59.643465   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetConfigRaw
	I0621 18:27:59.644062   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:59.646345   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.646685   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.646713   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.646924   30068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:27:59.647158   30068 start.go:128] duration metric: took 21.710666055s to createHost
	I0621 18:27:59.647181   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.649469   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.649766   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.649808   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.649962   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.650164   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.650334   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.650463   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.650585   30068 main.go:141] libmachine: Using SSH client type: native
	I0621 18:27:59.650778   30068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0621 18:27:59.650790   30068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0621 18:27:59.762223   30068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718994479.737744516
	
	I0621 18:27:59.762248   30068 fix.go:216] guest clock: 1718994479.737744516
	I0621 18:27:59.762259   30068 fix.go:229] Guest: 2024-06-21 18:27:59.737744516 +0000 UTC Remote: 2024-06-21 18:27:59.647170431 +0000 UTC m=+77.232139235 (delta=90.574085ms)
	I0621 18:27:59.762274   30068 fix.go:200] guest clock delta is within tolerance: 90.574085ms
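The guest-clock check compares the VM's `date +%s.%N` output against the host's wall clock and only resynchronizes when the delta exceeds a tolerance. A rough reconstruction of that comparison using the Guest/Remote values logged above (the 2s tolerance is an assumption for illustration; minikube's actual threshold may differ):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's "seconds.nanoseconds" timestamp and returns
	// how far the guest clock is ahead of (positive) or behind (negative) the
	// reference time.
	func clockDelta(guestStamp string, reference time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestStamp), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		return time.Unix(sec, nsec).Sub(reference), nil
	}

	func main() {
		// Guest and Remote values taken from the run above; prints delta=90.574085ms.
		reference := time.Date(2024, 6, 21, 18, 27, 59, 647170431, time.UTC)
		delta, err := clockDelta("1718994479.737744516", reference)
		if err != nil {
			panic(err)
		}
		const tolerance = 2 * time.Second // assumed threshold, for illustration only
		fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta.Abs() <= tolerance)
	}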
	I0621 18:27:59.762279   30068 start.go:83] releasing machines lock for "ha-406291-m02", held for 21.825898335s
	I0621 18:27:59.762294   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.762550   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:27:59.765379   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.765744   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.765772   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.768017   30068 out.go:177] * Found network options:
	I0621 18:27:59.769201   30068 out.go:177]   - NO_PROXY=192.168.39.198
	W0621 18:27:59.770311   30068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0621 18:27:59.770350   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.770853   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.771049   30068 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:27:59.771143   30068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:27:59.771180   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	W0621 18:27:59.771247   30068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0621 18:27:59.771305   30068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:27:59.771322   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:27:59.774073   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774210   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774455   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.774482   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774586   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.774595   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:27:59.774615   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:27:59.774740   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:27:59.774796   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.774875   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:27:59.774963   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.775030   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:27:59.775150   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:27:59.775184   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:28:00.009851   30068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:28:00.016373   30068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:28:00.016450   30068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:28:00.032199   30068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0621 18:28:00.032221   30068 start.go:494] detecting cgroup driver to use...
	I0621 18:28:00.032283   30068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:28:00.047343   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:28:00.061720   30068 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:28:00.061774   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:28:00.074668   30068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:28:00.087919   30068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:28:00.213060   30068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:28:00.376339   30068 docker.go:233] disabling docker service ...
	I0621 18:28:00.376406   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:28:00.391732   30068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:28:00.405305   30068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:28:00.525867   30068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:28:00.642362   30068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 18:28:00.656276   30068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:28:00.673811   30068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:28:00.673883   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.683794   30068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:28:00.683849   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.693601   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.703298   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.712924   30068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:28:00.722921   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.733272   30068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.749781   30068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:28:00.759708   30068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:28:00.768749   30068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 18:28:00.768811   30068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 18:28:00.780758   30068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 18:28:00.789993   30068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:28:00.904855   30068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 18:28:01.039631   30068 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:28:01.039706   30068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:28:01.044480   30068 start.go:562] Will wait 60s for crictl version
	I0621 18:28:01.044536   30068 ssh_runner.go:195] Run: which crictl
	I0621 18:28:01.048220   30068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:28:01.089333   30068 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:28:01.089402   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:28:01.115665   30068 ssh_runner.go:195] Run: crio --version
	I0621 18:28:01.144585   30068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:28:01.145952   30068 out.go:177]   - env NO_PROXY=192.168.39.198
	I0621 18:28:01.147149   30068 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:28:01.149745   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:28:01.150121   30068 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:27:51 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:28:01.150153   30068 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:28:01.150424   30068 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:28:01.154395   30068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 18:28:01.167802   30068 mustload.go:65] Loading cluster: ha-406291
	I0621 18:28:01.168024   30068 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:28:01.168528   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:28:01.168581   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:28:01.183458   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35465
	I0621 18:28:01.183955   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:28:01.184452   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:28:01.184472   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:28:01.184809   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:28:01.185006   30068 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:28:01.186504   30068 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:28:01.186796   30068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:28:01.186838   30068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:28:01.201898   30068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38995
	I0621 18:28:01.202307   30068 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:28:01.202715   30068 main.go:141] libmachine: Using API Version  1
	I0621 18:28:01.202735   30068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:28:01.203060   30068 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:28:01.203242   30068 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:28:01.203402   30068 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.89
	I0621 18:28:01.203414   30068 certs.go:194] generating shared ca certs ...
	I0621 18:28:01.203427   30068 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.203536   30068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:28:01.203569   30068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:28:01.203578   30068 certs.go:256] generating profile certs ...
	I0621 18:28:01.203637   30068 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:28:01.203663   30068 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63
	I0621 18:28:01.203682   30068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.89 192.168.39.254]
	I0621 18:28:01.277240   30068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 ...
	I0621 18:28:01.277269   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63: {Name:mk0eb1e86875fe5e87f845f9e621f66001c859bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.277433   30068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63 ...
	I0621 18:28:01.277446   30068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63: {Name:mk95e28e76a927e44fae3dabafa76ecc474c70ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:28:01.277517   30068 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.abe9db63 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:28:01.277686   30068 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.abe9db63 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
	I0621 18:28:01.277852   30068 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:28:01.277870   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:28:01.277883   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:28:01.277894   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:28:01.277906   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:28:01.277922   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:28:01.277934   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:28:01.277946   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:28:01.277957   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:28:01.278003   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:28:01.278030   30068 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:28:01.278039   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:28:01.278059   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:28:01.278080   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:28:01.278100   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:28:01.278136   30068 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:28:01.278162   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.278179   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.278191   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.278220   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:28:01.281289   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:28:01.281749   30068 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:28:01.281771   30068 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:28:01.281960   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:28:01.282180   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:28:01.282351   30068 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:28:01.282534   30068 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:28:01.350153   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0621 18:28:01.355146   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0621 18:28:01.366317   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0621 18:28:01.370418   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0621 18:28:01.381527   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0621 18:28:01.385371   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0621 18:28:01.395583   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0621 18:28:01.399523   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0621 18:28:01.409427   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0621 18:28:01.413340   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0621 18:28:01.424281   30068 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0621 18:28:01.428574   30068 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0621 18:28:01.443501   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:28:01.467141   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:28:01.489464   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:28:01.512839   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:28:01.536345   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0621 18:28:01.560903   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0621 18:28:01.585228   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:28:01.609236   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:28:01.632797   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:28:01.657717   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:28:01.680728   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:28:01.704813   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0621 18:28:01.722206   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0621 18:28:01.739548   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0621 18:28:01.757066   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0621 18:28:01.773769   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0621 18:28:01.790648   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0621 18:28:01.807019   30068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0621 18:28:01.824606   30068 ssh_runner.go:195] Run: openssl version
	I0621 18:28:01.830760   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:28:01.841994   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.846701   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.846753   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:28:01.852556   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0621 18:28:01.863407   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:28:01.874586   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.879134   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.879185   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:28:01.884636   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:28:01.895639   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:28:01.907107   30068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.911747   30068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.911813   30068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:28:01.917537   30068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 18:28:01.928452   30068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:28:01.932569   30068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 18:28:01.932640   30068 kubeadm.go:928] updating node {m02 192.168.39.89 8443 v1.30.2 crio true true} ...
	I0621 18:28:01.932831   30068 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0621 18:28:01.932869   30068 kube-vip.go:115] generating kube-vip config ...
	I0621 18:28:01.932919   30068 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:28:01.949970   30068 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:28:01.950046   30068 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
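This manifest runs kube-vip as a static pod, so it ultimately has to land under the kubelet's static-pod directory on the node (the write happens over SSH in minikube). A trivial local stand-in for that placement step; only the /etc/kubernetes/manifests directory is confirmed by the directory list earlier in this log, while the kube-vip.yaml filename is an assumption here:

	package main

	import (
		"log"
		"os"
		"path/filepath"
	)

	func main() {
		// manifest would be the kube-vip pod YAML printed above.
		manifest := []byte("apiVersion: v1\nkind: Pod\n# ...\n")
		dst := filepath.Join("/etc/kubernetes/manifests", "kube-vip.yaml")
		// The kubelet watches this directory and runs whatever pod specs it
		// finds there as static pods, without going through the API server.
		if err := os.WriteFile(dst, manifest, 0o644); err != nil {
			log.Fatal(err)
		}
	}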
	I0621 18:28:01.950102   30068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:28:01.960116   30068 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0621 18:28:01.960197   30068 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0621 18:28:01.969893   30068 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0621 18:28:01.969926   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0621 18:28:01.969997   30068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0621 18:28:01.970033   30068 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0621 18:28:01.970001   30068 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0621 18:28:01.974344   30068 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0621 18:28:01.974375   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0621 18:28:02.755689   30068 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0621 18:28:02.755764   30068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0621 18:28:02.760415   30068 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0621 18:28:02.760448   30068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0621 18:28:55.051081   30068 out.go:177] 
	W0621 18:28:55.052955   30068 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0 0x49e27e0] Decompressors:map[bz2:0xc000769610 gz:0xc000769618 tar:0xc0007695c0 tar.bz2:0xc0007695d0 tar.gz:0xc0007695e0 tar.xz:0xc0007695f0 tar.zst:0xc000769600 tbz2:0xc0007695d0 tgz:0xc0007695e0 txz:0xc0007695f0 tzst:0xc000769600 xz:0xc000769620 zip:0xc000769630 zst:0xc000769628] Getters:map[file:0xc0009371c0 http:0xc
0008bcf50 https:0xc0008bcfa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.3:46716->151.101.193.55:443: read: connection reset by peer
	W0621 18:28:55.052979   30068 out.go:239] * 
	W0621 18:28:55.053829   30068 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0621 18:28:55.055312   30068 out.go:177] 
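The start aborted while downloading the kubelet binary (connection reset by peer from dl.k8s.io), so the binary cache was never populated. A minimal workaround sketch for a re-run on this host, assuming curl and sha256sum are available; the cache path, version, and URLs are taken verbatim from the log above:

	# Hypothetical pre-seeding of minikube's binary cache so a retried "minikube start"
	# skips the flaky download. The .sha256 file contains only the digest, so pair it
	# with the local path before handing it to sha256sum for verification.
	CACHE=/home/jenkins/minikube-integration/19112-8111/.minikube/cache/linux/amd64/v1.30.2
	mkdir -p "$CACHE"
	curl -fLo "$CACHE/kubelet" https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet
	echo "$(curl -fsSL https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256)  $CACHE/kubelet" | sha256sum --check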
	
	
	==> CRI-O <==
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.011877954Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995586011855952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=14f83d8a-1830-4a23-9792-445bf3a2088b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.012384335Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=707603ac-2313-441b-a877-55178239fad1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.012455046Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=707603ac-2313-441b-a877-55178239fad1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.012677455Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=707603ac-2313-441b-a877-55178239fad1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.047197013Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=49e89477-7107-47ef-8904-b0bc9aa7664f name=/runtime.v1.RuntimeService/Version
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.047283910Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=49e89477-7107-47ef-8904-b0bc9aa7664f name=/runtime.v1.RuntimeService/Version
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.048108298Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a58b6679-f983-4f48-8c8e-b07366ccc399 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.048690990Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995586048667585,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a58b6679-f983-4f48-8c8e-b07366ccc399 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.049192593Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e39f3ba-ef03-4663-b6dc-59b292a62776 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.049247853Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e39f3ba-ef03-4663-b6dc-59b292a62776 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.049472778Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e39f3ba-ef03-4663-b6dc-59b292a62776 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.086794522Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=58ce0a13-50a6-4f6a-8ec2-b21a86db22e4 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.086867101Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=58ce0a13-50a6-4f6a-8ec2-b21a86db22e4 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.087944670Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5fa5a8f3-8789-4661-8cf1-d42e0b1dff1f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.088445247Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995586088420241,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5fa5a8f3-8789-4661-8cf1-d42e0b1dff1f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.089010332Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f94e00fa-d587-41c9-95dd-6e4ffe17b6aa name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.089061916Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f94e00fa-d587-41c9-95dd-6e4ffe17b6aa name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.089341050Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f94e00fa-d587-41c9-95dd-6e4ffe17b6aa name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.124548893Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=86293290-5ac7-40d5-bcda-64504d2a6175 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.124618882Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=86293290-5ac7-40d5-bcda-64504d2a6175 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.125766920Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e4fdec80-0cd3-4df0-ae4e-2f12c6e3d479 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.126253732Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718995586126209491,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4fdec80-0cd3-4df0-ae4e-2f12c6e3d479 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.126840558Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea750684-91b0-45ff-99aa-eb975d86d0a2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.126893822Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea750684-91b0-45ff-99aa-eb975d86d0a2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:46:26 ha-406291 crio[679]: time="2024-06-21 18:46:26.127196780Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718994540131727223,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25,PodSandboxId:59eb38b2794b02c40a970ef9379dae06b25af94b5b9c194af2f39044b8a80656,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459904595458,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718994459852756179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c,PodSandboxId:a68caa8578d30bee67d56155e9bfeab46712a74a991014cd43e82838bc7efe53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718994459870343273,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5a
f0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:17189944
58069897639,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718994457887540977,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631,PodSandboxId:79ad95611cf2281c2deb0a5f369eb5271fac76b4211a8efb382176679a1375b7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718994441017516435,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29bf44d365a415a68be28c9aad205c23,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718994438148424764,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718994438095663243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718994438069298161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718994438003779700,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea750684-91b0-45ff-99aa-eb975d86d0a2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	252cb2f279857       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   17 minutes ago      Running             busybox                   0                   cd0fd4f6a3d6c       busybox-fc5497c4f-qvl48
	6d732e2622f11       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      18 minutes ago      Running             coredns                   0                   59eb38b2794b0       coredns-7db6d8ff4d-7ng4v
	6088ccc5ec4be       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      18 minutes ago      Running             coredns                   0                   a68caa8578d30       coredns-7db6d8ff4d-nx5xs
	9d0ad73531279       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       0                   ab6a16146209c       storage-provisioner
	468b13f5a8054       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      18 minutes ago      Running             kindnet-cni               0                   956df8749e8db       kindnet-vnds7
	e41f8891c5177       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      18 minutes ago      Running             kube-proxy                0                   ab9fd8c2e0094       kube-proxy-xnbqj
	96a229fabb5aa       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     19 minutes ago      Running             kube-vip                  0                   79ad95611cf22       kube-vip-ha-406291
	a143e6000662a       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      19 minutes ago      Running             kube-scheduler            0                   7cae0fc993f3a       kube-scheduler-ha-406291
	89b399d67fa40       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      19 minutes ago      Running             etcd                      0                   afce4542ea7ca       etcd-ha-406291
	2d71c6ae5cee5       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      19 minutes ago      Running             kube-apiserver            0                   9552de7a0cb73       kube-apiserver-ha-406291
	3fbe446b39e8d       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      19 minutes ago      Running             kube-controller-manager   0                   2b8837f8e36da       kube-controller-manager-ha-406291
	
	
	==> coredns [6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57758 - 16030 "HINFO IN 938012208132191314.8379741084222464033. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014128651s
	[INFO] 10.244.0.4:60864 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000870211s
	[INFO] 10.244.0.4:49527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014553s
	[INFO] 10.244.0.4:59987 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181145s
	[INFO] 10.244.0.4:59378 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009664502s
	[INFO] 10.244.0.4:59188 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181625s
	[INFO] 10.244.0.4:33100 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137671s
	[INFO] 10.244.0.4:43551 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129631s
	[INFO] 10.244.0.4:59759 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152418s
	[INFO] 10.244.0.4:60292 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090372s
	[INFO] 10.244.0.4:47967 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093215s
	[INFO] 10.244.0.4:44642 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175452s
	[INFO] 10.244.0.4:49677 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000070108s
	
	
	==> coredns [6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45911 - 30730 "HINFO IN 2397840142540691982.2649863782968500509. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014966559s
	[INFO] 10.244.0.4:38404 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.013105268s
	[INFO] 10.244.0.4:49299 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.225770527s
	[INFO] 10.244.0.4:41342 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.010990835s
	[INFO] 10.244.0.4:55838 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003903098s
	[INFO] 10.244.0.4:59078 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163236s
	[INFO] 10.244.0.4:39541 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147137s
	[INFO] 10.244.0.4:47420 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000120366s
	[INFO] 10.244.0.4:54009 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000255172s
	
	
	==> describe nodes <==
	Name:               ha-406291
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=ha-406291
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_21T18_27_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 18:27:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406291
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 18:46:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 18:44:44 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 18:44:44 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 18:44:44 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 18:44:44 +0000   Fri, 21 Jun 2024 18:27:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    ha-406291
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 10b5f2f4e64d426eb3a71e7a23c0cea5
	  System UUID:                10b5f2f4-e64d-426e-b3a7-1e7a23c0cea5
	  Boot ID:                    10778ad9-ed13-4749-a084-25b2b2bfde76
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qvl48              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7db6d8ff4d-7ng4v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 coredns-7db6d8ff4d-nx5xs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-ha-406291                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-vnds7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-apiserver-ha-406291             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-406291    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-xnbqj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-406291             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-406291                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 18m   kube-proxy       
	  Normal  Starting                 19m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m   kubelet          Node ha-406291 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m   kubelet          Node ha-406291 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m   kubelet          Node ha-406291 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m   node-controller  Node ha-406291 event: Registered Node ha-406291 in Controller
	  Normal  NodeReady                18m   kubelet          Node ha-406291 status is now: NodeReady
	
	
	Name:               ha-406291-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406291-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=ha-406291
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_21T18_41_02_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 18:41:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406291-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 18:46:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 18:41:31 +0000   Fri, 21 Jun 2024 18:41:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 18:41:31 +0000   Fri, 21 Jun 2024 18:41:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 18:41:31 +0000   Fri, 21 Jun 2024 18:41:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 18:41:31 +0000   Fri, 21 Jun 2024 18:41:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.193
	  Hostname:    ha-406291-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7aeb6d6b65b246d89e229cf308cb4c9a
	  System UUID:                7aeb6d6b-65b2-46d8-9e22-9cf308cb4c9a
	  Boot ID:                    077bb108-4737-40c3-9892-3695b5a49d4a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-drm4v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kindnet-xrm6w              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m25s
	  kube-system                 kube-proxy-vknv4           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m19s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m25s (x2 over 5m25s)  kubelet          Node ha-406291-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m25s (x2 over 5m25s)  kubelet          Node ha-406291-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m25s (x2 over 5m25s)  kubelet          Node ha-406291-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-406291-m03 event: Registered Node ha-406291-m03 in Controller
	  Normal  NodeReady                5m16s                  kubelet          Node ha-406291-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[Jun21 18:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051748] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037330] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.458081] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.725935] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +4.855560] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun21 18:27] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.057394] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056681] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.167604] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.147792] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.253886] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +3.905184] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.549385] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.060073] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.066237] systemd-fstab-generator[1360]: Ignoring "noauto" option for root device
	[  +0.078680] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.552032] kauditd_printk_skb: 21 callbacks suppressed
	[Jun21 18:28] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8] <==
	{"level":"info","ts":"2024-06-21T18:27:18.939339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgVoteResp from f1d2ab5330a2a0e3 at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.939349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became leader at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.93936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f1d2ab5330a2a0e3 elected leader f1d2ab5330a2a0e3 at term 2"}
	{"level":"info","ts":"2024-06-21T18:27:18.949394Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.951989Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f1d2ab5330a2a0e3","local-member-attributes":"{Name:ha-406291 ClientURLs:[https://192.168.39.198:2379]}","request-path":"/0/members/f1d2ab5330a2a0e3/attributes","cluster-id":"9fb372ad12afeb1b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-21T18:27:18.952029Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:27:18.952218Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:27:18.966375Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9fb372ad12afeb1b","local-member-id":"f1d2ab5330a2a0e3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.966532Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.966591Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:27:18.968078Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.198:2379"}
	{"level":"info","ts":"2024-06-21T18:27:18.969834Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-21T18:27:18.973596Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-21T18:27:18.986355Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-21T18:27:37.357719Z","caller":"traceutil/trace.go:171","msg":"trace[571743030] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"105.067279ms","start":"2024-06-21T18:27:37.252598Z","end":"2024-06-21T18:27:37.357665Z","steps":["trace[571743030] 'process raft request'  (duration: 48.775466ms)","trace[571743030] 'compare'  (duration: 56.093787ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T18:28:12.689426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.176174ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11593268453381319053 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:496 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:369 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-21T18:28:12.689586Z","caller":"traceutil/trace.go:171","msg":"trace[939483523] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"172.541349ms","start":"2024-06-21T18:28:12.517021Z","end":"2024-06-21T18:28:12.689563Z","steps":["trace[939483523] 'process raft request'  (duration: 46.605278ms)","trace[939483523] 'compare'  (duration: 124.988397ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-21T18:37:19.55118Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2024-06-21T18:37:19.562898Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":969,"took":"11.353931ms","hash":518064132,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2441216,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-06-21T18:37:19.562955Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":518064132,"revision":969,"compact-revision":-1}
	{"level":"info","ts":"2024-06-21T18:41:01.46327Z","caller":"traceutil/trace.go:171","msg":"trace[373022302] transaction","detail":"{read_only:false; response_revision:1916; number_of_response:1; }","duration":"202.232692ms","start":"2024-06-21T18:41:01.260997Z","end":"2024-06-21T18:41:01.46323Z","steps":["trace[373022302] 'process raft request'  (duration: 201.291371ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-21T18:41:01.463374Z","caller":"traceutil/trace.go:171","msg":"trace[1787973675] transaction","detail":"{read_only:false; response_revision:1917; number_of_response:1; }","duration":"177.381269ms","start":"2024-06-21T18:41:01.285981Z","end":"2024-06-21T18:41:01.463362Z","steps":["trace[1787973675] 'process raft request'  (duration: 177.120594ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-21T18:42:19.558621Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1509}
	{"level":"info","ts":"2024-06-21T18:42:19.563203Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1509,"took":"4.232264ms","hash":4134822789,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2011136,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-06-21T18:42:19.563247Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4134822789,"revision":1509,"compact-revision":969}
	
	
	==> kernel <==
	 18:46:26 up 19 min,  0 users,  load average: 1.13, 0.39, 0.19
	Linux ha-406291 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d] <==
	I0621 18:45:19.791468       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:45:29.797097       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:45:29.797324       1 main.go:227] handling current node
	I0621 18:45:29.797358       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:45:29.797419       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:45:39.801918       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:45:39.802012       1 main.go:227] handling current node
	I0621 18:45:39.802036       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:45:39.802052       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:45:49.814318       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:45:49.814403       1 main.go:227] handling current node
	I0621 18:45:49.814428       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:45:49.814433       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:45:59.819469       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:45:59.819500       1 main.go:227] handling current node
	I0621 18:45:59.819510       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:45:59.819515       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:46:09.827898       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:46:09.828096       1 main.go:227] handling current node
	I0621 18:46:09.828197       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:46:09.828225       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:46:19.840901       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:46:19.840942       1 main.go:227] handling current node
	I0621 18:46:19.840953       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:46:19.840958       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3] <==
	I0621 18:27:21.231033       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0621 18:27:21.231057       1 policy_source.go:224] refreshing policies
	E0621 18:27:21.244004       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0621 18:27:21.291900       1 controller.go:615] quota admission added evaluator for: namespaces
	I0621 18:27:21.301249       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0621 18:27:22.093764       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0621 18:27:22.100226       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0621 18:27:22.100345       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0621 18:27:22.679124       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0621 18:27:22.717908       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0621 18:27:22.803597       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0621 18:27:22.812663       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.198]
	I0621 18:27:22.813674       1 controller.go:615] quota admission added evaluator for: endpoints
	I0621 18:27:22.817676       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0621 18:27:23.142771       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0621 18:27:24.323202       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0621 18:27:24.338622       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0621 18:27:24.532806       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0621 18:27:36.921775       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0621 18:27:37.247444       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0621 18:40:26.217258       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52318: use of closed network connection
	E0621 18:40:26.646809       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52394: use of closed network connection
	E0621 18:40:27.039177       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52460: use of closed network connection
	E0621 18:40:29.475531       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52582: use of closed network connection
	E0621 18:40:29.631306       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52614: use of closed network connection
	
	
	==> kube-controller-manager [3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31] <==
	I0621 18:27:37.660938       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="161.085µs"
	I0621 18:27:39.328050       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="55.475µs"
	I0621 18:27:39.330983       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.725µs"
	I0621 18:27:39.352409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.246µs"
	I0621 18:27:39.366116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.163µs"
	I0621 18:27:40.575618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="65.679µs"
	I0621 18:27:40.612176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.937752ms"
	I0621 18:27:40.612598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.232µs"
	I0621 18:27:40.634931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.444693ms"
	I0621 18:27:40.635035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.847µs"
	I0621 18:27:41.885215       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0621 18:28:57.137627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.563277ms"
	I0621 18:28:57.164070       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.375749ms"
	I0621 18:28:57.164194       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.743µs"
	I0621 18:29:00.876863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.452577ms"
	I0621 18:29:00.877083       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.932µs"
	I0621 18:41:01.468373       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-406291-m03\" does not exist"
	I0621 18:41:01.505245       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-406291-m03" podCIDRs=["10.244.1.0/24"]
	I0621 18:41:02.015312       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-406291-m03"
	I0621 18:41:10.879504       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-406291-m03"
	I0621 18:41:10.905675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="137.95µs"
	I0621 18:41:10.905996       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.91µs"
	I0621 18:41:10.921286       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.939µs"
	I0621 18:41:14.431187       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.902838ms"
	I0621 18:41:14.431268       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.911µs"
	
	
	==> kube-proxy [e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547] <==
	I0621 18:27:38.126736       1 server_linux.go:69] "Using iptables proxy"
	I0621 18:27:38.143236       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.198"]
	I0621 18:27:38.177576       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0621 18:27:38.177626       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0621 18:27:38.177644       1 server_linux.go:165] "Using iptables Proxier"
	I0621 18:27:38.180797       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0621 18:27:38.181002       1 server.go:872] "Version info" version="v1.30.2"
	I0621 18:27:38.181026       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 18:27:38.182882       1 config.go:192] "Starting service config controller"
	I0621 18:27:38.183195       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0621 18:27:38.183262       1 config.go:101] "Starting endpoint slice config controller"
	I0621 18:27:38.183278       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0621 18:27:38.184787       1 config.go:319] "Starting node config controller"
	I0621 18:27:38.184819       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0621 18:27:38.283818       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0621 18:27:38.283839       1 shared_informer.go:320] Caches are synced for service config
	I0621 18:27:38.285303       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d] <==
	W0621 18:27:21.175406       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0621 18:27:21.176948       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0621 18:27:21.176960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0621 18:27:21.176992       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:21.177025       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177056       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0621 18:27:21.177088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0621 18:27:21.177120       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0621 18:27:21.177197       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:21.177204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0621 18:27:22.041765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.041824       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.144830       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:22.144881       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0621 18:27:22.217224       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.217266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.256407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:22.256450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0621 18:27:22.361486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0621 18:27:22.361536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0621 18:27:22.366073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0621 18:27:22.366190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0621 18:27:25.267361       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 21 18:42:24 ha-406291 kubelet[1367]: E0621 18:42:24.484793    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:42:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:42:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:42:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:42:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:43:24 ha-406291 kubelet[1367]: E0621 18:43:24.483749    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:43:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:43:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:43:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:43:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:44:24 ha-406291 kubelet[1367]: E0621 18:44:24.483527    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:44:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:44:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:44:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:44:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:45:24 ha-406291 kubelet[1367]: E0621 18:45:24.484220    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:45:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:45:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:45:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:45:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:46:24 ha-406291 kubelet[1367]: E0621 18:46:24.483559    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:46:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:46:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:46:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:46:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-406291 -n ha-406291
helpers_test.go:261: (dbg) Run:  kubectl --context ha-406291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-p2c87
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-406291 describe pod busybox-fc5497c4f-p2c87
helpers_test.go:282: (dbg) kubectl --context ha-406291 describe pod busybox-fc5497c4f-p2c87:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-p2c87
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q8tzk (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-q8tzk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  7m3s (x3 over 17m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  3s (x3 over 5m17s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.16s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (463.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-406291 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-406291 -v=7 --alsologtostderr
E0621 18:47:17.910046   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-406291 -v=7 --alsologtostderr: exit status 82 (2m0.481078803s)

                                                
                                                
-- stdout --
	* Stopping node "ha-406291-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0621 18:46:27.347958   37200 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:46:27.348256   37200 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:46:27.348268   37200 out.go:304] Setting ErrFile to fd 2...
	I0621 18:46:27.348272   37200 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:46:27.348479   37200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:46:27.348736   37200 out.go:298] Setting JSON to false
	I0621 18:46:27.348831   37200 mustload.go:65] Loading cluster: ha-406291
	I0621 18:46:27.349218   37200 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:46:27.349315   37200 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:46:27.349499   37200 mustload.go:65] Loading cluster: ha-406291
	I0621 18:46:27.349667   37200 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:46:27.349707   37200 stop.go:39] StopHost: ha-406291-m03
	I0621 18:46:27.350160   37200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:46:27.350218   37200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:46:27.365677   37200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I0621 18:46:27.366318   37200 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:46:27.367369   37200 main.go:141] libmachine: Using API Version  1
	I0621 18:46:27.367457   37200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:46:27.368058   37200 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:46:27.370976   37200 out.go:177] * Stopping node "ha-406291-m03"  ...
	I0621 18:46:27.372796   37200 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0621 18:46:27.372827   37200 main.go:141] libmachine: (ha-406291-m03) Calling .DriverName
	I0621 18:46:27.373138   37200 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0621 18:46:27.373169   37200 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHHostname
	I0621 18:46:27.375937   37200 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:46:27.376363   37200 main.go:141] libmachine: (ha-406291-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:72:f9", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:40:45 +0000 UTC Type:0 Mac:52:54:00:26:72:f9 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-406291-m03 Clientid:01:52:54:00:26:72:f9}
	I0621 18:46:27.376392   37200 main.go:141] libmachine: (ha-406291-m03) DBG | domain ha-406291-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:26:72:f9 in network mk-ha-406291
	I0621 18:46:27.376565   37200 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHPort
	I0621 18:46:27.376742   37200 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHKeyPath
	I0621 18:46:27.376916   37200 main.go:141] libmachine: (ha-406291-m03) Calling .GetSSHUsername
	I0621 18:46:27.377069   37200 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m03/id_rsa Username:docker}
	I0621 18:46:27.464918   37200 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0621 18:46:27.519871   37200 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0621 18:46:27.572969   37200 main.go:141] libmachine: Stopping "ha-406291-m03"...
	I0621 18:46:27.573013   37200 main.go:141] libmachine: (ha-406291-m03) Calling .GetState
	I0621 18:46:27.574704   37200 main.go:141] libmachine: (ha-406291-m03) Calling .Stop
	I0621 18:46:27.578080   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 0/120
	I0621 18:46:28.580132   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 1/120
	I0621 18:46:29.581697   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 2/120
	I0621 18:46:30.583523   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 3/120
	I0621 18:46:31.584832   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 4/120
	I0621 18:46:32.587105   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 5/120
	I0621 18:46:33.588749   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 6/120
	I0621 18:46:34.590227   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 7/120
	I0621 18:46:35.592293   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 8/120
	I0621 18:46:36.593855   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 9/120
	I0621 18:46:37.596207   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 10/120
	I0621 18:46:38.597577   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 11/120
	I0621 18:46:39.599126   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 12/120
	I0621 18:46:40.600672   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 13/120
	I0621 18:46:41.602195   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 14/120
	I0621 18:46:42.604290   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 15/120
	I0621 18:46:43.605924   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 16/120
	I0621 18:46:44.607658   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 17/120
	I0621 18:46:45.609178   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 18/120
	I0621 18:46:46.610661   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 19/120
	I0621 18:46:47.612945   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 20/120
	I0621 18:46:48.614365   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 21/120
	I0621 18:46:49.615865   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 22/120
	I0621 18:46:50.617402   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 23/120
	I0621 18:46:51.618969   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 24/120
	I0621 18:46:52.621164   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 25/120
	I0621 18:46:53.622734   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 26/120
	I0621 18:46:54.624226   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 27/120
	I0621 18:46:55.625646   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 28/120
	I0621 18:46:56.627309   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 29/120
	I0621 18:46:57.629493   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 30/120
	I0621 18:46:58.631130   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 31/120
	I0621 18:46:59.632601   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 32/120
	I0621 18:47:00.634045   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 33/120
	I0621 18:47:01.635713   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 34/120
	I0621 18:47:02.637950   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 35/120
	I0621 18:47:03.640570   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 36/120
	I0621 18:47:04.641914   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 37/120
	I0621 18:47:05.643163   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 38/120
	I0621 18:47:06.644690   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 39/120
	I0621 18:47:07.646052   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 40/120
	I0621 18:47:08.648445   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 41/120
	I0621 18:47:09.649824   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 42/120
	I0621 18:47:10.651162   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 43/120
	I0621 18:47:11.652620   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 44/120
	I0621 18:47:12.654493   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 45/120
	I0621 18:47:13.656531   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 46/120
	I0621 18:47:14.657958   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 47/120
	I0621 18:47:15.660592   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 48/120
	I0621 18:47:16.661992   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 49/120
	I0621 18:47:17.664458   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 50/120
	I0621 18:47:18.665952   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 51/120
	I0621 18:47:19.667505   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 52/120
	I0621 18:47:20.668967   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 53/120
	I0621 18:47:21.670544   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 54/120
	I0621 18:47:22.672437   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 55/120
	I0621 18:47:23.673949   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 56/120
	I0621 18:47:24.676441   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 57/120
	I0621 18:47:25.677929   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 58/120
	I0621 18:47:26.680294   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 59/120
	I0621 18:47:27.682734   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 60/120
	I0621 18:47:28.684099   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 61/120
	I0621 18:47:29.685540   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 62/120
	I0621 18:47:30.686950   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 63/120
	I0621 18:47:31.688413   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 64/120
	I0621 18:47:32.690299   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 65/120
	I0621 18:47:33.692152   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 66/120
	I0621 18:47:34.693634   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 67/120
	I0621 18:47:35.694922   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 68/120
	I0621 18:47:36.696323   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 69/120
	I0621 18:47:37.698702   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 70/120
	I0621 18:47:38.700471   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 71/120
	I0621 18:47:39.702402   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 72/120
	I0621 18:47:40.703776   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 73/120
	I0621 18:47:41.705074   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 74/120
	I0621 18:47:42.707200   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 75/120
	I0621 18:47:43.708832   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 76/120
	I0621 18:47:44.710467   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 77/120
	I0621 18:47:45.712635   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 78/120
	I0621 18:47:46.714496   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 79/120
	I0621 18:47:47.716596   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 80/120
	I0621 18:47:48.717941   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 81/120
	I0621 18:47:49.719161   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 82/120
	I0621 18:47:50.720373   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 83/120
	I0621 18:47:51.721528   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 84/120
	I0621 18:47:52.723583   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 85/120
	I0621 18:47:53.724956   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 86/120
	I0621 18:47:54.726578   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 87/120
	I0621 18:47:55.728027   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 88/120
	I0621 18:47:56.729712   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 89/120
	I0621 18:47:57.732255   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 90/120
	I0621 18:47:58.733624   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 91/120
	I0621 18:47:59.735728   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 92/120
	I0621 18:48:00.737210   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 93/120
	I0621 18:48:01.738847   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 94/120
	I0621 18:48:02.740904   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 95/120
	I0621 18:48:03.742444   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 96/120
	I0621 18:48:04.744006   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 97/120
	I0621 18:48:05.745416   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 98/120
	I0621 18:48:06.746916   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 99/120
	I0621 18:48:07.749259   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 100/120
	I0621 18:48:08.750656   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 101/120
	I0621 18:48:09.752038   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 102/120
	I0621 18:48:10.753434   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 103/120
	I0621 18:48:11.754746   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 104/120
	I0621 18:48:12.756865   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 105/120
	I0621 18:48:13.758239   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 106/120
	I0621 18:48:14.760522   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 107/120
	I0621 18:48:15.761820   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 108/120
	I0621 18:48:16.763329   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 109/120
	I0621 18:48:17.765867   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 110/120
	I0621 18:48:18.767465   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 111/120
	I0621 18:48:19.768884   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 112/120
	I0621 18:48:20.770389   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 113/120
	I0621 18:48:21.771761   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 114/120
	I0621 18:48:22.773721   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 115/120
	I0621 18:48:23.775094   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 116/120
	I0621 18:48:24.776529   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 117/120
	I0621 18:48:25.777948   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 118/120
	I0621 18:48:26.779236   37200 main.go:141] libmachine: (ha-406291-m03) Waiting for machine to stop 119/120
	I0621 18:48:27.780687   37200 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0621 18:48:27.780744   37200 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0621 18:48:27.782970   37200 out.go:177] 
	W0621 18:48:27.784547   37200 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0621 18:48:27.784570   37200 out.go:239] * 
	* 
	W0621 18:48:27.786811   37200 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0621 18:48:27.788209   37200 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-406291 -v=7 --alsologtostderr" : exit status 82
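The stderr block above shows why the stop failed: after requesting the stop, the driver polls the VM state roughly once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120", about a two-minute budget), the guest never leaves the "Running" state, and the command exits with GUEST_STOP_TIMEOUT (exit status 82). A minimal Go sketch of that poll-until-stopped pattern, for illustration only (the function names and the simulated always-"Running" guest are placeholders, not minikube's actual implementation):

// stop_wait_sketch.go - illustrative only; mirrors the 120 x ~1s wait loop in the log above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// stopVM asks the driver to stop the guest, then polls its state once per
// second for up to 120 attempts, matching the "Waiting for machine to stop
// N/120" lines captured above.
func stopVM(requestStop func() error, getState func() string) error {
	if err := requestStop(); err != nil {
		return err
	}
	for i := 0; i < 120; i++ {
		fmt.Printf("Waiting for machine to stop %d/120\n", i)
		if getState() != "Running" {
			return nil // guest shut down within the budget
		}
		time.Sleep(time.Second)
	}
	// This is the condition the run above hit: two minutes elapsed and the
	// guest was still reported as "Running".
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Simulate this run's failure mode: the stop request "succeeds" but the
	// guest never changes state, so stopVM returns the timeout error.
	err := stopVM(
		func() error { return nil },
		func() string { return "Running" },
	)
	fmt.Println("stop err:", err)
}

Because the VM stayed "Running" for all 120 attempts, the restart attempted next runs against nodes that were never cleanly stopped, which is visible below in the "Updating the running kvm2 ... VM" lines.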
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-406291 --wait=true -v=7 --alsologtostderr
E0621 18:50:54.862558   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
ha_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-406291 --wait=true -v=7 --alsologtostderr: exit status 80 (5m40.454211892s)

                                                
                                                
-- stdout --
	* [ha-406291] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-406291" primary control-plane node in "ha-406291" cluster
	* Updating the running kvm2 "ha-406291" VM ...
	* Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	* Enabled addons: 
	
	* Starting "ha-406291-m02" control-plane node in "ha-406291" cluster
	* Updating the running kvm2 "ha-406291-m02" VM ...
	* Found network options:
	  - NO_PROXY=192.168.39.198
	* Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.198
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0621 18:48:27.831476   37614 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:48:27.831947   37614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:48:27.831958   37614 out.go:304] Setting ErrFile to fd 2...
	I0621 18:48:27.831963   37614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:48:27.832237   37614 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:48:27.832938   37614 out.go:298] Setting JSON to false
	I0621 18:48:27.833836   37614 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5406,"bootTime":1718990302,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 18:48:27.833898   37614 start.go:139] virtualization: kvm guest
	I0621 18:48:27.836380   37614 out.go:177] * [ha-406291] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 18:48:27.837785   37614 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 18:48:27.837821   37614 notify.go:220] Checking for updates...
	I0621 18:48:27.840567   37614 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 18:48:27.841953   37614 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:48:27.843187   37614 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:48:27.844558   37614 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 18:48:27.845907   37614 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 18:48:27.847613   37614 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:48:27.847732   37614 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 18:48:27.848413   37614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:48:27.848482   37614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:48:27.863080   37614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46699
	I0621 18:48:27.863473   37614 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:48:27.864007   37614 main.go:141] libmachine: Using API Version  1
	I0621 18:48:27.864033   37614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:48:27.864411   37614 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:48:27.864641   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:48:27.900101   37614 out.go:177] * Using the kvm2 driver based on existing profile
	I0621 18:48:27.901277   37614 start.go:297] selected driver: kvm2
	I0621 18:48:27.901299   37614 start.go:901] validating driver "kvm2" against &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.193 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:48:27.901441   37614 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 18:48:27.901750   37614 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:48:27.901843   37614 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 18:48:27.916614   37614 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 18:48:27.917318   37614 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0621 18:48:27.917379   37614 cni.go:84] Creating CNI manager for ""
	I0621 18:48:27.917391   37614 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0621 18:48:27.917453   37614 start.go:340] cluster config:
	{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.193 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:48:27.917576   37614 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:48:27.919430   37614 out.go:177] * Starting "ha-406291" primary control-plane node in "ha-406291" cluster
	I0621 18:48:27.920610   37614 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:48:27.920649   37614 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 18:48:27.920659   37614 cache.go:56] Caching tarball of preloaded images
	I0621 18:48:27.920773   37614 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:48:27.920787   37614 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:48:27.920894   37614 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:48:27.921114   37614 start.go:360] acquireMachinesLock for ha-406291: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:48:27.921161   37614 start.go:364] duration metric: took 28.141µs to acquireMachinesLock for "ha-406291"
	I0621 18:48:27.921180   37614 start.go:96] Skipping create...Using existing machine configuration
	I0621 18:48:27.921190   37614 fix.go:54] fixHost starting: 
	I0621 18:48:27.921463   37614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:48:27.921500   37614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:48:27.936449   37614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33963
	I0621 18:48:27.936960   37614 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:48:27.937520   37614 main.go:141] libmachine: Using API Version  1
	I0621 18:48:27.937546   37614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:48:27.937916   37614 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:48:27.938097   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:48:27.938231   37614 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:48:27.939757   37614 fix.go:112] recreateIfNeeded on ha-406291: state=Running err=<nil>
	W0621 18:48:27.939772   37614 fix.go:138] unexpected machine state, will restart: <nil>
	I0621 18:48:27.941724   37614 out.go:177] * Updating the running kvm2 "ha-406291" VM ...
	I0621 18:48:27.942997   37614 machine.go:94] provisionDockerMachine start ...
	I0621 18:48:27.943024   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:48:27.943206   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:48:27.945749   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:27.946257   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:27.946287   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:27.946456   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:48:27.946613   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:27.946788   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:27.946925   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:48:27.947091   37614 main.go:141] libmachine: Using SSH client type: native
	I0621 18:48:27.947292   37614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:48:27.947307   37614 main.go:141] libmachine: About to run SSH command:
	hostname
	I0621 18:48:28.051086   37614 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291
	
	I0621 18:48:28.051116   37614 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:48:28.051394   37614 buildroot.go:166] provisioning hostname "ha-406291"
	I0621 18:48:28.051420   37614 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:48:28.051618   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:48:28.054638   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.055076   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:28.055099   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.055296   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:48:28.055524   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.055672   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.055901   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:48:28.056090   37614 main.go:141] libmachine: Using SSH client type: native
	I0621 18:48:28.056290   37614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:48:28.056305   37614 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291 && echo "ha-406291" | sudo tee /etc/hostname
	I0621 18:48:28.169279   37614 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291
	
	I0621 18:48:28.169305   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:48:28.171914   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.172264   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:28.172307   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.172459   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:48:28.172637   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.172764   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.172937   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:48:28.173112   37614 main.go:141] libmachine: Using SSH client type: native
	I0621 18:48:28.173334   37614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:48:28.173358   37614 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:48:28.270684   37614 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:48:28.270733   37614 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:48:28.270776   37614 buildroot.go:174] setting up certificates
	I0621 18:48:28.270798   37614 provision.go:84] configureAuth start
	I0621 18:48:28.270816   37614 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:48:28.271110   37614 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:48:28.274048   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.274413   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:28.274440   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.274625   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:48:28.276911   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.277237   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:28.277273   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.277425   37614 provision.go:143] copyHostCerts
	I0621 18:48:28.277474   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:48:28.277514   37614 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:48:28.277525   37614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:48:28.277586   37614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:48:28.277681   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:48:28.277699   37614 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:48:28.277706   37614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:48:28.277732   37614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:48:28.277852   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:48:28.277874   37614 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:48:28.277881   37614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:48:28.277908   37614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:48:28.277967   37614 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291 san=[127.0.0.1 192.168.39.198 ha-406291 localhost minikube]
	I0621 18:48:28.770044   37614 provision.go:177] copyRemoteCerts
	I0621 18:48:28.770118   37614 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:48:28.770140   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:48:28.772531   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.772859   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:28.772888   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.773061   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:48:28.773274   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.773406   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:48:28.773544   37614 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:48:28.851817   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:48:28.851907   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:48:28.875949   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:48:28.876034   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0621 18:48:28.899404   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:48:28.899479   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0621 18:48:28.922832   37614 provision.go:87] duration metric: took 652.015125ms to configureAuth
	I0621 18:48:28.922865   37614 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:48:28.923083   37614 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:48:28.923147   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:48:28.925724   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.926104   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:28.926143   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.926302   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:48:28.926538   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.926671   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.926850   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:48:28.926962   37614 main.go:141] libmachine: Using SSH client type: native
	I0621 18:48:28.927117   37614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:48:28.927134   37614 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:49:59.775008   37614 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 18:49:59.775041   37614 machine.go:97] duration metric: took 1m31.832022982s to provisionDockerMachine
	I0621 18:49:59.775056   37614 start.go:293] postStartSetup for "ha-406291" (driver="kvm2")
	I0621 18:49:59.775071   37614 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:49:59.775090   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:49:59.775469   37614 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:49:59.775508   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:49:59.778762   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:49:59.779252   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:49:59.779278   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:49:59.779425   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:49:59.779621   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:49:59.779730   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:49:59.779846   37614 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:49:59.861058   37614 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:49:59.865212   37614 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:49:59.865238   37614 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:49:59.865306   37614 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:49:59.865412   37614 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:49:59.865426   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:49:59.865530   37614 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:49:59.874847   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:49:59.898766   37614 start.go:296] duration metric: took 123.693827ms for postStartSetup
	I0621 18:49:59.898814   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:49:59.899163   37614 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0621 18:49:59.899191   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:49:59.902342   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:49:59.902758   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:49:59.902781   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:49:59.902968   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:49:59.903148   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:49:59.903308   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:49:59.903440   37614 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	W0621 18:49:59.980000   37614 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0621 18:49:59.980025   37614 fix.go:56] duration metric: took 1m32.058837235s for fixHost
	I0621 18:49:59.980045   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:49:59.983376   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:49:59.983859   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:49:59.983891   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:49:59.984114   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:49:59.984357   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:49:59.984534   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:49:59.984719   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:49:59.984900   37614 main.go:141] libmachine: Using SSH client type: native
	I0621 18:49:59.985122   37614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:49:59.985139   37614 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0621 18:50:00.091107   37614 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718995800.019349431
	
	I0621 18:50:00.091140   37614 fix.go:216] guest clock: 1718995800.019349431
	I0621 18:50:00.091157   37614 fix.go:229] Guest: 2024-06-21 18:50:00.019349431 +0000 UTC Remote: 2024-06-21 18:49:59.98003189 +0000 UTC m=+92.182726233 (delta=39.317541ms)
	I0621 18:50:00.091202   37614 fix.go:200] guest clock delta is within tolerance: 39.317541ms
	I0621 18:50:00.091209   37614 start.go:83] releasing machines lock for "ha-406291", held for 1m32.170035409s
	I0621 18:50:00.091239   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:50:00.091570   37614 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:50:00.094257   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:00.094684   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:50:00.094714   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:00.094867   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:50:00.095587   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:50:00.095720   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:50:00.095777   37614 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:50:00.095826   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:50:00.095948   37614 ssh_runner.go:195] Run: cat /version.json
	I0621 18:50:00.095969   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:50:00.099018   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:00.099048   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:00.099355   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:50:00.099392   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:00.099417   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:50:00.099546   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:50:00.099547   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:00.099784   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:50:00.099802   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:50:00.099953   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:50:00.099953   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:50:00.100151   37614 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:50:00.100166   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:50:00.100406   37614 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:50:00.221373   37614 ssh_runner.go:195] Run: systemctl --version
	I0621 18:50:00.227389   37614 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:50:00.385205   37614 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:50:00.394152   37614 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:50:00.394215   37614 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:50:00.403823   37614 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0621 18:50:00.403852   37614 start.go:494] detecting cgroup driver to use...
	I0621 18:50:00.403906   37614 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:50:00.419979   37614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:50:00.434440   37614 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:50:00.434502   37614 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:50:00.448314   37614 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:50:00.462079   37614 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:50:00.614685   37614 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:50:00.759729   37614 docker.go:233] disabling docker service ...
	I0621 18:50:00.759808   37614 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:50:00.777480   37614 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:50:00.792874   37614 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:50:00.942947   37614 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:50:01.096969   37614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
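The commands above stop and mask cri-docker and docker so that CRI-O is left as the only runtime the kubelet can reach. Condensed into a by-hand sequence, mirroring the commands in the log rather than minikube's code (illustrative only):

    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket && sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket && sudo systemctl mask docker.service
    sudo systemctl is-active --quiet docker || echo "docker is no longer active"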
	I0621 18:50:01.111115   37614 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:50:01.175106   37614 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:50:01.175190   37614 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.232028   37614 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:50:01.232101   37614 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.280475   37614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.294904   37614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.316249   37614 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:50:01.333062   37614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.348820   37614 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.371299   37614 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
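Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (illustrative reconstruction; the log does not capture the file itself, and section headers and unrelated keys are omitted):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]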
	I0621 18:50:01.389314   37614 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:50:01.401788   37614 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 18:50:01.422679   37614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:50:01.648445   37614 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 18:50:02.047527   37614 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:50:02.047604   37614 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:50:02.052768   37614 start.go:562] Will wait 60s for crictl version
	I0621 18:50:02.052832   37614 ssh_runner.go:195] Run: which crictl
	I0621 18:50:02.056555   37614 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:50:02.094299   37614 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:50:02.094367   37614 ssh_runner.go:195] Run: crio --version
	I0621 18:50:02.123963   37614 ssh_runner.go:195] Run: crio --version
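The two 60-second waits above (first for the socket path, then for crictl) amount to something like the following loop, shown only as a sketch of what is being waited on:

    timeout 60 sh -c 'until stat /var/run/crio/crio.sock >/dev/null 2>&1; do sleep 1; done'
    sudo /usr/bin/crictl version    # should report RuntimeName cri-o, RuntimeVersion 1.29.1 as above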
	I0621 18:50:02.156468   37614 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:50:02.158024   37614 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:50:02.161125   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:02.161548   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:50:02.161570   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:02.161875   37614 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:50:02.167481   37614 kubeadm.go:877] updating cluster {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.193 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0621 18:50:02.167692   37614 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:50:02.167755   37614 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:50:02.219832   37614 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 18:50:02.219854   37614 crio.go:433] Images already preloaded, skipping extraction
	I0621 18:50:02.219899   37614 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:50:02.255684   37614 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 18:50:02.255710   37614 cache_images.go:84] Images are preloaded, skipping loading
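The preload check above only inspects the JSON that crictl returns; to eyeball the same image list on the node, something like the following works (jq is an assumption here and is not necessarily present in the minikube VM):

    sudo crictl images --output json | jq -r '.images[].repoTags[]'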
	I0621 18:50:02.255720   37614 kubeadm.go:928] updating node { 192.168.39.198 8443 v1.30.2 crio true true} ...
	I0621 18:50:02.255840   37614 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
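The unit drop-in above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the scp a few lines below; a quick way to confirm what systemd actually sees, purely as a manual check, is:

    sudo systemctl cat kubelet
    sudo systemctl status kubelet --no-pager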
	I0621 18:50:02.255924   37614 ssh_runner.go:195] Run: crio config
	I0621 18:50:02.317976   37614 cni.go:84] Creating CNI manager for ""
	I0621 18:50:02.317997   37614 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0621 18:50:02.318008   37614 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0621 18:50:02.318027   37614 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.198 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-406291 NodeName:ha-406291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0621 18:50:02.318155   37614 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-406291"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
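The rendered config above is copied to the node as /var/tmp/minikube/kubeadm.yaml.new (2153 bytes, see the scp below). A by-hand sanity check of such a file against the installed kubeadm could look like this; the validate subcommand is an assumption about kubeadm v1.30 and is not a step the log shows minikube running:

    sudo /var/lib/minikube/binaries/v1.30.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new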
	
	I0621 18:50:02.318171   37614 kube-vip.go:115] generating kube-vip config ...
	I0621 18:50:02.318209   37614 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:50:02.331312   37614 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:50:02.331435   37614 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
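This static pod manifest is written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp below); once kubelet picks it up, the VIP 192.168.39.254 should be bound on eth0 and the API server should answer on port 8443 through it. A minimal, purely illustrative check from inside the VM:

    ip -4 addr show eth0 | grep 192.168.39.254
    curl -k -sS https://192.168.39.254:8443/healthz; echo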
	I0621 18:50:02.331501   37614 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:50:02.342410   37614 binaries.go:44] Found k8s binaries, skipping transfer
	I0621 18:50:02.342501   37614 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0621 18:50:02.353833   37614 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0621 18:50:02.372067   37614 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0621 18:50:02.391049   37614 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0621 18:50:02.409310   37614 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0621 18:50:02.427547   37614 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0621 18:50:02.433079   37614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:50:02.582453   37614 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0621 18:50:02.598236   37614 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.198
	I0621 18:50:02.598258   37614 certs.go:194] generating shared ca certs ...
	I0621 18:50:02.598278   37614 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:50:02.598473   37614 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:50:02.598527   37614 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:50:02.598538   37614 certs.go:256] generating profile certs ...
	I0621 18:50:02.598630   37614 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:50:02.598657   37614 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.9def4995
	I0621 18:50:02.598668   37614 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.9def4995 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.89 192.168.39.254]
	I0621 18:50:02.663764   37614 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.9def4995 ...
	I0621 18:50:02.663805   37614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.9def4995: {Name:mk333c8edf0e5497704ceac44948ed6d5eae057c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:50:02.664011   37614 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.9def4995 ...
	I0621 18:50:02.664028   37614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.9def4995: {Name:mk5cd7253a5d75c3e8a117ab1180e6cf66770645 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:50:02.664122   37614 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.9def4995 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:50:02.664288   37614 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.9def4995 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
	I0621 18:50:02.664452   37614 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:50:02.664473   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:50:02.664492   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:50:02.664510   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:50:02.664528   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:50:02.664544   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:50:02.664558   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:50:02.664575   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:50:02.664593   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:50:02.664653   37614 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:50:02.664692   37614 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:50:02.664704   37614 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:50:02.664743   37614 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:50:02.664779   37614 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:50:02.664808   37614 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:50:02.664862   37614 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:50:02.664896   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:50:02.664913   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:50:02.664932   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:50:02.665576   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:50:02.694113   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:50:02.722523   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:50:02.749537   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:50:02.776614   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0621 18:50:02.805311   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0621 18:50:02.832592   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:50:02.857479   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:50:02.881711   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:50:02.907387   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:50:02.934334   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:50:02.959508   37614 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0621 18:50:02.977465   37614 ssh_runner.go:195] Run: openssl version
	I0621 18:50:02.983767   37614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:50:02.995314   37614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:50:03.001937   37614 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:50:03.002002   37614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:50:03.009327   37614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0621 18:50:03.022240   37614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:50:03.037533   37614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:50:03.042517   37614 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:50:03.042581   37614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:50:03.048576   37614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:50:03.059273   37614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:50:03.071497   37614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:50:03.076360   37614 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:50:03.076413   37614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:50:03.082259   37614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 18:50:03.092484   37614 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:50:03.097277   37614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0621 18:50:03.103376   37614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0621 18:50:03.109351   37614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0621 18:50:03.115157   37614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0621 18:50:03.120911   37614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0621 18:50:03.126507   37614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
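The -checkend 86400 calls above exit non-zero if a certificate expires within the next 86400 seconds (24 hours), which is how the existing control-plane certificates are judged still usable. Run by hand, the check reads:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"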
	I0621 18:50:03.132154   37614 kubeadm.go:391] StartCluster: {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.193 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:50:03.132279   37614 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0621 18:50:03.132331   37614 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0621 18:50:03.170290   37614 cri.go:89] found id: "6bba601718e9734309428daa119e2e5d6e129b3436277dc5011fa708f21b8de0"
	I0621 18:50:03.170317   37614 cri.go:89] found id: "adf7b4a3e9492eae203fe2ae963d6b1b131c8c6c809259fcf8ee94872bdf0bea"
	I0621 18:50:03.170320   37614 cri.go:89] found id: "6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25"
	I0621 18:50:03.170323   37614 cri.go:89] found id: "6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c"
	I0621 18:50:03.170326   37614 cri.go:89] found id: "9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b"
	I0621 18:50:03.170329   37614 cri.go:89] found id: "468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d"
	I0621 18:50:03.170331   37614 cri.go:89] found id: "e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547"
	I0621 18:50:03.170334   37614 cri.go:89] found id: "96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631"
	I0621 18:50:03.170336   37614 cri.go:89] found id: "a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d"
	I0621 18:50:03.170341   37614 cri.go:89] found id: "89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8"
	I0621 18:50:03.170344   37614 cri.go:89] found id: "2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3"
	I0621 18:50:03.170346   37614 cri.go:89] found id: "3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31"
	I0621 18:50:03.170349   37614 cri.go:89] found id: ""
	I0621 18:50:03.170399   37614 ssh_runner.go:195] Run: sudo runc list -f json
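The crictl listing above returned twelve kube-system container IDs plus one empty entry; the runc list call that follows reads container state straight from the OCI runtime. A by-hand way to compare the two views (jq assumed again, for readability only):

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    sudo runc list -f json | jq -r '.[].id'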

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-linux-amd64 node list -p ha-406291 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-406291
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-406291 -n ha-406291
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-406291 logs -n 25: (1.519079918s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |           |         |         |                     |                     |
	| node    | add -p ha-406291 -v=7                | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:41 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-406291 node stop m02 -v=7         | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:41 UTC | 21 Jun 24 18:41 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-406291 node start m02 -v=7        | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:41 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-406291 -v=7               | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:46 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-406291 -v=7                    | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:46 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-406291 --wait=true -v=7        | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:48 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-406291                    | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:54 UTC |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/21 18:48:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0621 18:48:27.831476   37614 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:48:27.831947   37614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:48:27.831958   37614 out.go:304] Setting ErrFile to fd 2...
	I0621 18:48:27.831963   37614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:48:27.832237   37614 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:48:27.832938   37614 out.go:298] Setting JSON to false
	I0621 18:48:27.833836   37614 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5406,"bootTime":1718990302,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 18:48:27.833898   37614 start.go:139] virtualization: kvm guest
	I0621 18:48:27.836380   37614 out.go:177] * [ha-406291] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 18:48:27.837785   37614 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 18:48:27.837821   37614 notify.go:220] Checking for updates...
	I0621 18:48:27.840567   37614 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 18:48:27.841953   37614 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:48:27.843187   37614 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:48:27.844558   37614 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 18:48:27.845907   37614 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 18:48:27.847613   37614 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:48:27.847732   37614 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 18:48:27.848413   37614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:48:27.848482   37614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:48:27.863080   37614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46699
	I0621 18:48:27.863473   37614 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:48:27.864007   37614 main.go:141] libmachine: Using API Version  1
	I0621 18:48:27.864033   37614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:48:27.864411   37614 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:48:27.864641   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:48:27.900101   37614 out.go:177] * Using the kvm2 driver based on existing profile
	I0621 18:48:27.901277   37614 start.go:297] selected driver: kvm2
	I0621 18:48:27.901299   37614 start.go:901] validating driver "kvm2" against &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.193 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:48:27.901441   37614 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 18:48:27.901750   37614 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:48:27.901843   37614 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 18:48:27.916614   37614 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 18:48:27.917318   37614 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0621 18:48:27.917379   37614 cni.go:84] Creating CNI manager for ""
	I0621 18:48:27.917391   37614 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0621 18:48:27.917453   37614 start.go:340] cluster config:
	{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.193 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:48:27.917576   37614 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:48:27.919430   37614 out.go:177] * Starting "ha-406291" primary control-plane node in "ha-406291" cluster
	I0621 18:48:27.920610   37614 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:48:27.920649   37614 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 18:48:27.920659   37614 cache.go:56] Caching tarball of preloaded images
	I0621 18:48:27.920773   37614 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:48:27.920787   37614 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:48:27.920894   37614 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:48:27.921114   37614 start.go:360] acquireMachinesLock for ha-406291: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:48:27.921161   37614 start.go:364] duration metric: took 28.141µs to acquireMachinesLock for "ha-406291"
	I0621 18:48:27.921180   37614 start.go:96] Skipping create...Using existing machine configuration
	I0621 18:48:27.921190   37614 fix.go:54] fixHost starting: 
	I0621 18:48:27.921463   37614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:48:27.921500   37614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:48:27.936449   37614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33963
	I0621 18:48:27.936960   37614 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:48:27.937520   37614 main.go:141] libmachine: Using API Version  1
	I0621 18:48:27.937546   37614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:48:27.937916   37614 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:48:27.938097   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:48:27.938231   37614 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:48:27.939757   37614 fix.go:112] recreateIfNeeded on ha-406291: state=Running err=<nil>
	W0621 18:48:27.939772   37614 fix.go:138] unexpected machine state, will restart: <nil>
	I0621 18:48:27.941724   37614 out.go:177] * Updating the running kvm2 "ha-406291" VM ...
	I0621 18:48:27.942997   37614 machine.go:94] provisionDockerMachine start ...
	I0621 18:48:27.943024   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:48:27.943206   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:48:27.945749   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:27.946257   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:27.946287   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:27.946456   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:48:27.946613   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:27.946788   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:27.946925   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:48:27.947091   37614 main.go:141] libmachine: Using SSH client type: native
	I0621 18:48:27.947292   37614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:48:27.947307   37614 main.go:141] libmachine: About to run SSH command:
	hostname
	I0621 18:48:28.051086   37614 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291
	
	I0621 18:48:28.051116   37614 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:48:28.051394   37614 buildroot.go:166] provisioning hostname "ha-406291"
	I0621 18:48:28.051420   37614 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:48:28.051618   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:48:28.054638   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.055076   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:28.055099   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.055296   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:48:28.055524   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.055672   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.055901   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:48:28.056090   37614 main.go:141] libmachine: Using SSH client type: native
	I0621 18:48:28.056290   37614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:48:28.056305   37614 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291 && echo "ha-406291" | sudo tee /etc/hostname
	I0621 18:48:28.169279   37614 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291
	
	I0621 18:48:28.169305   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:48:28.171914   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.172264   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:28.172307   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.172459   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:48:28.172637   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.172764   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.172937   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:48:28.173112   37614 main.go:141] libmachine: Using SSH client type: native
	I0621 18:48:28.173334   37614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:48:28.173358   37614 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:48:28.270684   37614 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:48:28.270733   37614 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:48:28.270776   37614 buildroot.go:174] setting up certificates
	I0621 18:48:28.270798   37614 provision.go:84] configureAuth start
	I0621 18:48:28.270816   37614 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:48:28.271110   37614 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:48:28.274048   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.274413   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:28.274440   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.274625   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:48:28.276911   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.277237   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:28.277273   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.277425   37614 provision.go:143] copyHostCerts
	I0621 18:48:28.277474   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:48:28.277514   37614 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:48:28.277525   37614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:48:28.277586   37614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:48:28.277681   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:48:28.277699   37614 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:48:28.277706   37614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:48:28.277732   37614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:48:28.277852   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:48:28.277874   37614 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:48:28.277881   37614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:48:28.277908   37614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:48:28.277967   37614 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291 san=[127.0.0.1 192.168.39.198 ha-406291 localhost minikube]
	I0621 18:48:28.770044   37614 provision.go:177] copyRemoteCerts
	I0621 18:48:28.770118   37614 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:48:28.770140   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:48:28.772531   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.772859   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:28.772888   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.773061   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:48:28.773274   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.773406   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:48:28.773544   37614 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:48:28.851817   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:48:28.851907   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:48:28.875949   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:48:28.876034   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0621 18:48:28.899404   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:48:28.899479   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0621 18:48:28.922832   37614 provision.go:87] duration metric: took 652.015125ms to configureAuth
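	The server certificate generated above (SANs: 127.0.0.1, 192.168.39.198, ha-406291, localhost, minikube) has just been copied to /etc/docker/server.pem on the guest. A minimal sketch for inspecting the SANs actually present in that file, using the paths from the log above (an illustrative check, not part of the test run):
	
	$ sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'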
	I0621 18:48:28.922865   37614 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:48:28.923083   37614 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:48:28.923147   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:48:28.925724   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.926104   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:28.926143   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.926302   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:48:28.926538   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.926671   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.926850   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:48:28.926962   37614 main.go:141] libmachine: Using SSH client type: native
	I0621 18:48:28.927117   37614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:48:28.927134   37614 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:49:59.775008   37614 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 18:49:59.775041   37614 machine.go:97] duration metric: took 1m31.832022982s to provisionDockerMachine
	I0621 18:49:59.775056   37614 start.go:293] postStartSetup for "ha-406291" (driver="kvm2")
	I0621 18:49:59.775071   37614 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:49:59.775090   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:49:59.775469   37614 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:49:59.775508   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:49:59.778762   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:49:59.779252   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:49:59.779278   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:49:59.779425   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:49:59.779621   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:49:59.779730   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:49:59.779846   37614 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:49:59.861058   37614 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:49:59.865212   37614 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:49:59.865238   37614 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:49:59.865306   37614 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:49:59.865412   37614 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:49:59.865426   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:49:59.865530   37614 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:49:59.874847   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:49:59.898766   37614 start.go:296] duration metric: took 123.693827ms for postStartSetup
	I0621 18:49:59.898814   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:49:59.899163   37614 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0621 18:49:59.899191   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:49:59.902342   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:49:59.902758   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:49:59.902781   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:49:59.902968   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:49:59.903148   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:49:59.903308   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:49:59.903440   37614 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	W0621 18:49:59.980000   37614 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0621 18:49:59.980025   37614 fix.go:56] duration metric: took 1m32.058837235s for fixHost
	I0621 18:49:59.980045   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:49:59.983376   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:49:59.983859   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:49:59.983891   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:49:59.984114   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:49:59.984357   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:49:59.984534   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:49:59.984719   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:49:59.984900   37614 main.go:141] libmachine: Using SSH client type: native
	I0621 18:49:59.985122   37614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:49:59.985139   37614 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0621 18:50:00.091107   37614 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718995800.019349431
	
	I0621 18:50:00.091140   37614 fix.go:216] guest clock: 1718995800.019349431
	I0621 18:50:00.091157   37614 fix.go:229] Guest: 2024-06-21 18:50:00.019349431 +0000 UTC Remote: 2024-06-21 18:49:59.98003189 +0000 UTC m=+92.182726233 (delta=39.317541ms)
	I0621 18:50:00.091202   37614 fix.go:200] guest clock delta is within tolerance: 39.317541ms
	I0621 18:50:00.091209   37614 start.go:83] releasing machines lock for "ha-406291", held for 1m32.170035409s
	I0621 18:50:00.091239   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:50:00.091570   37614 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:50:00.094257   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:00.094684   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:50:00.094714   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:00.094867   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:50:00.095587   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:50:00.095720   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:50:00.095777   37614 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:50:00.095826   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:50:00.095948   37614 ssh_runner.go:195] Run: cat /version.json
	I0621 18:50:00.095969   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:50:00.099018   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:00.099048   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:00.099355   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:50:00.099392   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:00.099417   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:50:00.099546   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:50:00.099547   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:00.099784   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:50:00.099802   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:50:00.099953   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:50:00.099953   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:50:00.100151   37614 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:50:00.100166   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:50:00.100406   37614 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:50:00.221373   37614 ssh_runner.go:195] Run: systemctl --version
	I0621 18:50:00.227389   37614 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:50:00.385205   37614 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:50:00.394152   37614 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:50:00.394215   37614 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:50:00.403823   37614 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0621 18:50:00.403852   37614 start.go:494] detecting cgroup driver to use...
	I0621 18:50:00.403906   37614 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:50:00.419979   37614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:50:00.434440   37614 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:50:00.434502   37614 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:50:00.448314   37614 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:50:00.462079   37614 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:50:00.614685   37614 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:50:00.759729   37614 docker.go:233] disabling docker service ...
	I0621 18:50:00.759808   37614 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:50:00.777480   37614 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:50:00.792874   37614 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:50:00.942947   37614 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:50:01.096969   37614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 18:50:01.111115   37614 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:50:01.175106   37614 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:50:01.175190   37614 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.232028   37614 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:50:01.232101   37614 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.280475   37614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.294904   37614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.316249   37614 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:50:01.333062   37614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.348820   37614 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.371299   37614 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.389314   37614 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:50:01.401788   37614 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
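	The sed/grep edits above drop minikube's CRI-O settings into /etc/crio/crio.conf.d/02-crio.conf: pause_image registry.k8s.io/pause:3.9, cgroup_manager "cgroupfs", conmon_cgroup "pod", and net.ipv4.ip_unprivileged_port_start=0 under default_sysctls, ahead of the daemon-reload and crio restart that follow. A quick sketch (not from the log) to confirm the drop-in after the restart:
	
	$ grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf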
	I0621 18:50:01.422679   37614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:50:01.648445   37614 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 18:50:02.047527   37614 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:50:02.047604   37614 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:50:02.052768   37614 start.go:562] Will wait 60s for crictl version
	I0621 18:50:02.052832   37614 ssh_runner.go:195] Run: which crictl
	I0621 18:50:02.056555   37614 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:50:02.094299   37614 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:50:02.094367   37614 ssh_runner.go:195] Run: crio --version
	I0621 18:50:02.123963   37614 ssh_runner.go:195] Run: crio --version
	I0621 18:50:02.156468   37614 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:50:02.158024   37614 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:50:02.161125   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:02.161548   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:50:02.161570   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:02.161875   37614 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:50:02.167481   37614 kubeadm.go:877] updating cluster {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.193 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0621 18:50:02.167692   37614 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:50:02.167755   37614 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:50:02.219832   37614 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 18:50:02.219854   37614 crio.go:433] Images already preloaded, skipping extraction
	I0621 18:50:02.219899   37614 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:50:02.255684   37614 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 18:50:02.255710   37614 cache_images.go:84] Images are preloaded, skipping loading
	I0621 18:50:02.255720   37614 kubeadm.go:928] updating node { 192.168.39.198 8443 v1.30.2 crio true true} ...
	I0621 18:50:02.255840   37614 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0621 18:50:02.255924   37614 ssh_runner.go:195] Run: crio config
	I0621 18:50:02.317976   37614 cni.go:84] Creating CNI manager for ""
	I0621 18:50:02.317997   37614 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0621 18:50:02.318008   37614 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0621 18:50:02.318027   37614 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.198 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-406291 NodeName:ha-406291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0621 18:50:02.318155   37614 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-406291"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
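	The kubeadm config rendered above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A hedged sanity check, assuming the node-local kubeadm binary under /var/lib/minikube/binaries/v1.30.2 (listed further down) offers the `config validate` subcommand in v1.30:
	
	$ sudo /var/lib/minikube/binaries/v1.30.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new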
	
	I0621 18:50:02.318171   37614 kube-vip.go:115] generating kube-vip config ...
	I0621 18:50:02.318209   37614 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:50:02.331312   37614 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:50:02.331435   37614 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
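	The kube-vip manifest above is written as a static pod (scp to /etc/kubernetes/manifests/kube-vip.yaml below), so kubelet runs it on the host network to hold the control-plane VIP 192.168.39.254 on eth0 and load-balance port 8443. Two illustrative checks on the guest, not taken from the log:
	
	$ sudo crictl ps --name kube-vip
	$ ip -4 addr show dev eth0 | grep 192.168.39.254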
	I0621 18:50:02.331501   37614 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:50:02.342410   37614 binaries.go:44] Found k8s binaries, skipping transfer
	I0621 18:50:02.342501   37614 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0621 18:50:02.353833   37614 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0621 18:50:02.372067   37614 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0621 18:50:02.391049   37614 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0621 18:50:02.409310   37614 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0621 18:50:02.427547   37614 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0621 18:50:02.433079   37614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:50:02.582453   37614 ssh_runner.go:195] Run: sudo systemctl start kubelet
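	At this point the kubelet unit file and its 10-kubeadm.conf drop-in (including --node-ip=192.168.39.198 from the unit text above) are in place and kubelet has been started. A minimal verification sketch, assuming shell access to the guest:
	
	$ systemctl is-active kubelet
	$ systemctl cat kubelet | grep -- --node-ip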
	I0621 18:50:02.598236   37614 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.198
	I0621 18:50:02.598258   37614 certs.go:194] generating shared ca certs ...
	I0621 18:50:02.598278   37614 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:50:02.598473   37614 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:50:02.598527   37614 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:50:02.598538   37614 certs.go:256] generating profile certs ...
	I0621 18:50:02.598630   37614 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:50:02.598657   37614 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.9def4995
	I0621 18:50:02.598668   37614 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.9def4995 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.89 192.168.39.254]
	I0621 18:50:02.663764   37614 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.9def4995 ...
	I0621 18:50:02.663805   37614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.9def4995: {Name:mk333c8edf0e5497704ceac44948ed6d5eae057c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:50:02.664011   37614 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.9def4995 ...
	I0621 18:50:02.664028   37614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.9def4995: {Name:mk5cd7253a5d75c3e8a117ab1180e6cf66770645 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:50:02.664122   37614 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.9def4995 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:50:02.664288   37614 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.9def4995 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
	I0621 18:50:02.664452   37614 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:50:02.664473   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:50:02.664492   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:50:02.664510   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:50:02.664528   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:50:02.664544   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:50:02.664558   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:50:02.664575   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:50:02.664593   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:50:02.664653   37614 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:50:02.664692   37614 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:50:02.664704   37614 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:50:02.664743   37614 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:50:02.664779   37614 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:50:02.664808   37614 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:50:02.664862   37614 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:50:02.664896   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:50:02.664913   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:50:02.664932   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:50:02.665576   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:50:02.694113   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:50:02.722523   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:50:02.749537   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:50:02.776614   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0621 18:50:02.805311   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0621 18:50:02.832592   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:50:02.857479   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:50:02.881711   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:50:02.907387   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:50:02.934334   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:50:02.959508   37614 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0621 18:50:02.977465   37614 ssh_runner.go:195] Run: openssl version
	I0621 18:50:02.983767   37614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:50:02.995314   37614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:50:03.001937   37614 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:50:03.002002   37614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:50:03.009327   37614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0621 18:50:03.022240   37614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:50:03.037533   37614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:50:03.042517   37614 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:50:03.042581   37614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:50:03.048576   37614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:50:03.059273   37614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:50:03.071497   37614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:50:03.076360   37614 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:50:03.076413   37614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:50:03.082259   37614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
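	The pattern in the three blocks above is the standard OpenSSL CA-directory layout: each CA under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash, which is why minikubeCA.pem ends up behind b5213941.0. Reproducing the hash and the link target by hand (expected output inferred from the ln commands in the log):
	
	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ readlink /etc/ssl/certs/b5213941.0
	/etc/ssl/certs/minikubeCA.pem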
	I0621 18:50:03.092484   37614 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:50:03.097277   37614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0621 18:50:03.103376   37614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0621 18:50:03.109351   37614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0621 18:50:03.115157   37614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0621 18:50:03.120911   37614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0621 18:50:03.126507   37614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
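	Each of the `-checkend 86400` runs above exits 0 only if the certificate stays valid for at least the next 86400 seconds (24 hours); presumably a non-zero exit is what would trigger regeneration of that cert. A standalone sketch of the same check with an explicit result:
	
	$ sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	      && echo "valid for at least 24h" || echo "expires within 24h"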
	I0621 18:50:03.132154   37614 kubeadm.go:391] StartCluster: {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.193 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
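
	The StartCluster dump above is minikube's cluster configuration printed as a Go struct; its Nodes slice describes two control-plane nodes (the unnamed primary at 192.168.39.198 and m02 at 192.168.39.89) plus the worker m03. Read back into a type, each entry carries roughly the fields below; the names mirror the dump, but the actual definition lives in minikube's config package, so treat this as an approximation, not the source:

	// Illustrative shape of one Nodes entry from the dump above.
	type Node struct {
		Name              string // empty for the primary control-plane node
		IP                string // e.g. 192.168.39.198
		Port              int    // 8443 for control-plane nodes, 0 for the worker
		KubernetesVersion string // v1.30.2
		ContainerRuntime  string // crio
		ControlPlane      bool
		Worker            bool
	}
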
	I0621 18:50:03.132279   37614 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0621 18:50:03.132331   37614 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0621 18:50:03.170290   37614 cri.go:89] found id: "6bba601718e9734309428daa119e2e5d6e129b3436277dc5011fa708f21b8de0"
	I0621 18:50:03.170317   37614 cri.go:89] found id: "adf7b4a3e9492eae203fe2ae963d6b1b131c8c6c809259fcf8ee94872bdf0bea"
	I0621 18:50:03.170320   37614 cri.go:89] found id: "6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25"
	I0621 18:50:03.170323   37614 cri.go:89] found id: "6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c"
	I0621 18:50:03.170326   37614 cri.go:89] found id: "9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b"
	I0621 18:50:03.170329   37614 cri.go:89] found id: "468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d"
	I0621 18:50:03.170331   37614 cri.go:89] found id: "e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547"
	I0621 18:50:03.170334   37614 cri.go:89] found id: "96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631"
	I0621 18:50:03.170336   37614 cri.go:89] found id: "a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d"
	I0621 18:50:03.170341   37614 cri.go:89] found id: "89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8"
	I0621 18:50:03.170344   37614 cri.go:89] found id: "2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3"
	I0621 18:50:03.170346   37614 cri.go:89] found id: "3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31"
	I0621 18:50:03.170349   37614 cri.go:89] found id: ""
	I0621 18:50:03.170399   37614 ssh_runner.go:195] Run: sudo runc list -f json
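
	The cri.go lines above collect container IDs from CRI-O by running `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` on the node and treating each non-empty output line as one ID, then follow up with `runc list -f json`. A minimal stand-alone sketch of the same ID collection, using plain os/exec locally instead of minikube's ssh_runner (purely illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers returns the IDs of all CRI containers (running or
	// exited) whose pod lives in the kube-system namespace, mirroring the
	// `crictl ps -a --quiet --label ...` call in the log above.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if id := strings.TrimSpace(line); id != "" {
				ids = append(ids, id)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}
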
	
	
	==> CRI-O <==
	Jun 21 18:54:08 ha-406291 crio[4830]: time="2024-06-21 18:54:08.856228218Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718996048856195135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ffe2453a-fe7b-486a-aa65-11a616f141ee name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:54:08 ha-406291 crio[4830]: time="2024-06-21 18:54:08.856878910Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea5d7290-3616-42a3-8f52-43087da28ebb name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:08 ha-406291 crio[4830]: time="2024-06-21 18:54:08.856947775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea5d7290-3616-42a3-8f52-43087da28ebb name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:08 ha-406291 crio[4830]: time="2024-06-21 18:54:08.857402896Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:09a2e3d098856f2200e39c92669f6f175a32d42297a9a3d5c291978d1f8d0d74,PodSandboxId:231b7531a974b4fa1168f271b37ea5cf33df2e5ab59ea67d46149f9a8197404b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718995840721463906,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb10cac6d1c3e97a71930fb9a7f4b79dce5391ffc03f1ea516374c17821d716,PodSandboxId:908bde46281af414c0075aabce7890dfa087f381a3ef9a5b0651ab520cdb8435,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718995822483073221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"con
tainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c869f01d25b200b4c3df8e084f4eff83bea86cbd7c409e04f0a85157042dec2c,PodSandboxId:e10e95f5f35c01c0eb2ad3a0a49910bd49cf827b26c09a78b7dd3d2faa15fe55,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718995822456885612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c
679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35ca450b8450a611e9ad835bbf3d408c728e7e7d1fbf258c8f249d80bcf038f,PodSandboxId:8fec4c6e62141364888e488aa814c1f06b60e58be5c4bb875b6e1eb5ffc4a250,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718995821779424178,Labels:map[st
ring]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 369c576788ec675acc0ff507ad4caf20,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:246b5b36ac09f427c065ee257a5df705d3a4d6bb3c0bce5b8322f7d64496dc52,PodSandboxId:047b75f8fe402d3c3c7fcc65fc18c56ffec45e20f3f1a452338a41433d34e078,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718995807698855971,Labels:map[string]string{io.kubernetes.container.n
ame: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41ffe84b8dea76129f1fa5d5726f6cf43e1409a959998ebe3a3fc56d8699d7f,PodSandboxId:4a9342a5a2eeb43140514126f52d0c9fd38f727529c857e0891c8bf2d31c4a8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718995807806583037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.p
od.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dcbcf864ab99955feff994f6bcd539edc4380e9bffd7cd534dd967c7bad498,PodSandboxId:535a7ff15105f569395c6cf7f02fefc79c194a97e051fa5af9412f15bd20af54,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718995807504571464,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce53eeeec0f21c6681925b7c5e72b8595ab65de8b0d0b768da43f7f434af72d,PodSandboxId:bca8e9a757e1c46d1ca2cedba74336bb99f1b505f861e6ca80ae9d5053f4ed3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718995807469500725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59d0df4fcf162ec60f5d928ad001ff6a374887d38c9f6791aab5c706f47c791,PodSandboxId:4e2453ce7944062b3c2f93ec84b80a2b6493725c3f52899047ed966b2d36fd6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718995807408632939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c120a578b20e1b617a5b93202c07c27c30de5bfc4580b4c826235b3afc8204,PodSandboxId:84fbafaf5a0bea8e4df39e98942eb41300c5281d1b6217f02587c6fa3fbd2b34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718995807315798233,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2e61853ab788fb7b5222dedf458d7085852d9caf32cf492e3bce968e130374,PodSandboxId:b77046a9f35081deae7f5de5700954014cb07d84dbad8bcca2e9ad955a3e015a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718995807128041977,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b0
97b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bba601718e9734309428daa119e2e5d6e129b3436277dc5011fa708f21b8de0,PodSandboxId:ef224dee216468e736bbfc8457b6d7542c385548fcb0666c2ff7fa52d43b1156,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718995801444255575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annota
tions:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adf7b4a3e9492eae203fe2ae963d6b1b131c8c6c809259fcf8ee94872bdf0bea,PodSandboxId:3d95d41781333e360e7471bd45a44f887d5365c40348dafee3d31ac6130d068b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718995801432250413,Labels:map[string]string{io
.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5
b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718994540131805136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718994459852946952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718994458069993945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718994457887549344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0
d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718994438148586283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a
899,State:CONTAINER_EXITED,CreatedAt:1718994438095721977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:17189
94438069880812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:171899443800395583
8,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea5d7290-3616-42a3-8f52-43087da28ebb name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:08 ha-406291 crio[4830]: time="2024-06-21 18:54:08.902901665Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e075127-d61f-45d5-a37a-910b2b6d2a02 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:54:08 ha-406291 crio[4830]: time="2024-06-21 18:54:08.903008394Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e075127-d61f-45d5-a37a-910b2b6d2a02 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:54:08 ha-406291 crio[4830]: time="2024-06-21 18:54:08.904754270Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b0bae95-f5ab-4439-845d-b77b09d3128e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:54:08 ha-406291 crio[4830]: time="2024-06-21 18:54:08.905317307Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718996048905293555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b0bae95-f5ab-4439-845d-b77b09d3128e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:54:08 ha-406291 crio[4830]: time="2024-06-21 18:54:08.906013595Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa4a2dc7-1743-4af6-9f11-16bb15322f77 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:08 ha-406291 crio[4830]: time="2024-06-21 18:54:08.906067859Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa4a2dc7-1743-4af6-9f11-16bb15322f77 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:08 ha-406291 crio[4830]: time="2024-06-21 18:54:08.906712648Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:09a2e3d098856f2200e39c92669f6f175a32d42297a9a3d5c291978d1f8d0d74,PodSandboxId:231b7531a974b4fa1168f271b37ea5cf33df2e5ab59ea67d46149f9a8197404b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718995840721463906,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb10cac6d1c3e97a71930fb9a7f4b79dce5391ffc03f1ea516374c17821d716,PodSandboxId:908bde46281af414c0075aabce7890dfa087f381a3ef9a5b0651ab520cdb8435,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718995822483073221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"con
tainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c869f01d25b200b4c3df8e084f4eff83bea86cbd7c409e04f0a85157042dec2c,PodSandboxId:e10e95f5f35c01c0eb2ad3a0a49910bd49cf827b26c09a78b7dd3d2faa15fe55,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718995822456885612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c
679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35ca450b8450a611e9ad835bbf3d408c728e7e7d1fbf258c8f249d80bcf038f,PodSandboxId:8fec4c6e62141364888e488aa814c1f06b60e58be5c4bb875b6e1eb5ffc4a250,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718995821779424178,Labels:map[st
ring]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 369c576788ec675acc0ff507ad4caf20,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:246b5b36ac09f427c065ee257a5df705d3a4d6bb3c0bce5b8322f7d64496dc52,PodSandboxId:047b75f8fe402d3c3c7fcc65fc18c56ffec45e20f3f1a452338a41433d34e078,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718995807698855971,Labels:map[string]string{io.kubernetes.container.n
ame: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41ffe84b8dea76129f1fa5d5726f6cf43e1409a959998ebe3a3fc56d8699d7f,PodSandboxId:4a9342a5a2eeb43140514126f52d0c9fd38f727529c857e0891c8bf2d31c4a8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718995807806583037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.p
od.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dcbcf864ab99955feff994f6bcd539edc4380e9bffd7cd534dd967c7bad498,PodSandboxId:535a7ff15105f569395c6cf7f02fefc79c194a97e051fa5af9412f15bd20af54,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718995807504571464,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce53eeeec0f21c6681925b7c5e72b8595ab65de8b0d0b768da43f7f434af72d,PodSandboxId:bca8e9a757e1c46d1ca2cedba74336bb99f1b505f861e6ca80ae9d5053f4ed3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718995807469500725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59d0df4fcf162ec60f5d928ad001ff6a374887d38c9f6791aab5c706f47c791,PodSandboxId:4e2453ce7944062b3c2f93ec84b80a2b6493725c3f52899047ed966b2d36fd6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718995807408632939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c120a578b20e1b617a5b93202c07c27c30de5bfc4580b4c826235b3afc8204,PodSandboxId:84fbafaf5a0bea8e4df39e98942eb41300c5281d1b6217f02587c6fa3fbd2b34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718995807315798233,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2e61853ab788fb7b5222dedf458d7085852d9caf32cf492e3bce968e130374,PodSandboxId:b77046a9f35081deae7f5de5700954014cb07d84dbad8bcca2e9ad955a3e015a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718995807128041977,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b0
97b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bba601718e9734309428daa119e2e5d6e129b3436277dc5011fa708f21b8de0,PodSandboxId:ef224dee216468e736bbfc8457b6d7542c385548fcb0666c2ff7fa52d43b1156,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718995801444255575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annota
tions:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adf7b4a3e9492eae203fe2ae963d6b1b131c8c6c809259fcf8ee94872bdf0bea,PodSandboxId:3d95d41781333e360e7471bd45a44f887d5365c40348dafee3d31ac6130d068b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718995801432250413,Labels:map[string]string{io
.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5
b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718994540131805136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718994459852946952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718994458069993945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718994457887549344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0
d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718994438148586283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a
899,State:CONTAINER_EXITED,CreatedAt:1718994438095721977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:17189
94438069880812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:171899443800395583
8,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa4a2dc7-1743-4af6-9f11-16bb15322f77 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:08 ha-406291 crio[4830]: time="2024-06-21 18:54:08.952422133Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3a4ce2e3-f036-4302-851f-521bb85859ec name=/runtime.v1.RuntimeService/Version
	Jun 21 18:54:08 ha-406291 crio[4830]: time="2024-06-21 18:54:08.952495629Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3a4ce2e3-f036-4302-851f-521bb85859ec name=/runtime.v1.RuntimeService/Version
	Jun 21 18:54:08 ha-406291 crio[4830]: time="2024-06-21 18:54:08.954312143Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1c93f2d0-07bb-47fa-b59d-e2ecd201e809 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:54:08 ha-406291 crio[4830]: time="2024-06-21 18:54:08.954775500Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718996048954750731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c93f2d0-07bb-47fa-b59d-e2ecd201e809 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:54:08 ha-406291 crio[4830]: time="2024-06-21 18:54:08.955395767Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a5edbfad-c900-4b29-997b-77c786263435 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:08 ha-406291 crio[4830]: time="2024-06-21 18:54:08.955497544Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a5edbfad-c900-4b29-997b-77c786263435 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:08 ha-406291 crio[4830]: time="2024-06-21 18:54:08.955921315Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:09a2e3d098856f2200e39c92669f6f175a32d42297a9a3d5c291978d1f8d0d74,PodSandboxId:231b7531a974b4fa1168f271b37ea5cf33df2e5ab59ea67d46149f9a8197404b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718995840721463906,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb10cac6d1c3e97a71930fb9a7f4b79dce5391ffc03f1ea516374c17821d716,PodSandboxId:908bde46281af414c0075aabce7890dfa087f381a3ef9a5b0651ab520cdb8435,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718995822483073221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"con
tainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c869f01d25b200b4c3df8e084f4eff83bea86cbd7c409e04f0a85157042dec2c,PodSandboxId:e10e95f5f35c01c0eb2ad3a0a49910bd49cf827b26c09a78b7dd3d2faa15fe55,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718995822456885612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c
679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35ca450b8450a611e9ad835bbf3d408c728e7e7d1fbf258c8f249d80bcf038f,PodSandboxId:8fec4c6e62141364888e488aa814c1f06b60e58be5c4bb875b6e1eb5ffc4a250,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718995821779424178,Labels:map[st
ring]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 369c576788ec675acc0ff507ad4caf20,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:246b5b36ac09f427c065ee257a5df705d3a4d6bb3c0bce5b8322f7d64496dc52,PodSandboxId:047b75f8fe402d3c3c7fcc65fc18c56ffec45e20f3f1a452338a41433d34e078,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718995807698855971,Labels:map[string]string{io.kubernetes.container.n
ame: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41ffe84b8dea76129f1fa5d5726f6cf43e1409a959998ebe3a3fc56d8699d7f,PodSandboxId:4a9342a5a2eeb43140514126f52d0c9fd38f727529c857e0891c8bf2d31c4a8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718995807806583037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.p
od.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dcbcf864ab99955feff994f6bcd539edc4380e9bffd7cd534dd967c7bad498,PodSandboxId:535a7ff15105f569395c6cf7f02fefc79c194a97e051fa5af9412f15bd20af54,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718995807504571464,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce53eeeec0f21c6681925b7c5e72b8595ab65de8b0d0b768da43f7f434af72d,PodSandboxId:bca8e9a757e1c46d1ca2cedba74336bb99f1b505f861e6ca80ae9d5053f4ed3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718995807469500725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59d0df4fcf162ec60f5d928ad001ff6a374887d38c9f6791aab5c706f47c791,PodSandboxId:4e2453ce7944062b3c2f93ec84b80a2b6493725c3f52899047ed966b2d36fd6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718995807408632939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c120a578b20e1b617a5b93202c07c27c30de5bfc4580b4c826235b3afc8204,PodSandboxId:84fbafaf5a0bea8e4df39e98942eb41300c5281d1b6217f02587c6fa3fbd2b34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718995807315798233,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2e61853ab788fb7b5222dedf458d7085852d9caf32cf492e3bce968e130374,PodSandboxId:b77046a9f35081deae7f5de5700954014cb07d84dbad8bcca2e9ad955a3e015a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718995807128041977,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b0
97b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bba601718e9734309428daa119e2e5d6e129b3436277dc5011fa708f21b8de0,PodSandboxId:ef224dee216468e736bbfc8457b6d7542c385548fcb0666c2ff7fa52d43b1156,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718995801444255575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annota
tions:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adf7b4a3e9492eae203fe2ae963d6b1b131c8c6c809259fcf8ee94872bdf0bea,PodSandboxId:3d95d41781333e360e7471bd45a44f887d5365c40348dafee3d31ac6130d068b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718995801432250413,Labels:map[string]string{io
.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5
b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718994540131805136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718994459852946952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718994458069993945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718994457887549344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0
d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718994438148586283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a
899,State:CONTAINER_EXITED,CreatedAt:1718994438095721977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:17189
94438069880812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:171899443800395583
8,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a5edbfad-c900-4b29-997b-77c786263435 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:09 ha-406291 crio[4830]: time="2024-06-21 18:54:09.002578122Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=87c0983f-3e6f-49dc-ab88-72d6e4cf6f2b name=/runtime.v1.RuntimeService/Version
	Jun 21 18:54:09 ha-406291 crio[4830]: time="2024-06-21 18:54:09.002652891Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=87c0983f-3e6f-49dc-ab88-72d6e4cf6f2b name=/runtime.v1.RuntimeService/Version
	Jun 21 18:54:09 ha-406291 crio[4830]: time="2024-06-21 18:54:09.003622246Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2934fdaa-8085-4c21-8053-6ab9781023f5 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:54:09 ha-406291 crio[4830]: time="2024-06-21 18:54:09.004304686Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718996049004277402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2934fdaa-8085-4c21-8053-6ab9781023f5 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:54:09 ha-406291 crio[4830]: time="2024-06-21 18:54:09.004889372Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44b91a19-19d2-4ce0-9fcb-d524d304cae7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:09 ha-406291 crio[4830]: time="2024-06-21 18:54:09.004962620Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44b91a19-19d2-4ce0-9fcb-d524d304cae7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:09 ha-406291 crio[4830]: time="2024-06-21 18:54:09.005400155Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:09a2e3d098856f2200e39c92669f6f175a32d42297a9a3d5c291978d1f8d0d74,PodSandboxId:231b7531a974b4fa1168f271b37ea5cf33df2e5ab59ea67d46149f9a8197404b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718995840721463906,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb10cac6d1c3e97a71930fb9a7f4b79dce5391ffc03f1ea516374c17821d716,PodSandboxId:908bde46281af414c0075aabce7890dfa087f381a3ef9a5b0651ab520cdb8435,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718995822483073221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"con
tainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c869f01d25b200b4c3df8e084f4eff83bea86cbd7c409e04f0a85157042dec2c,PodSandboxId:e10e95f5f35c01c0eb2ad3a0a49910bd49cf827b26c09a78b7dd3d2faa15fe55,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718995822456885612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c
679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35ca450b8450a611e9ad835bbf3d408c728e7e7d1fbf258c8f249d80bcf038f,PodSandboxId:8fec4c6e62141364888e488aa814c1f06b60e58be5c4bb875b6e1eb5ffc4a250,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718995821779424178,Labels:map[st
ring]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 369c576788ec675acc0ff507ad4caf20,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:246b5b36ac09f427c065ee257a5df705d3a4d6bb3c0bce5b8322f7d64496dc52,PodSandboxId:047b75f8fe402d3c3c7fcc65fc18c56ffec45e20f3f1a452338a41433d34e078,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718995807698855971,Labels:map[string]string{io.kubernetes.container.n
ame: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41ffe84b8dea76129f1fa5d5726f6cf43e1409a959998ebe3a3fc56d8699d7f,PodSandboxId:4a9342a5a2eeb43140514126f52d0c9fd38f727529c857e0891c8bf2d31c4a8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718995807806583037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.p
od.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dcbcf864ab99955feff994f6bcd539edc4380e9bffd7cd534dd967c7bad498,PodSandboxId:535a7ff15105f569395c6cf7f02fefc79c194a97e051fa5af9412f15bd20af54,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718995807504571464,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce53eeeec0f21c6681925b7c5e72b8595ab65de8b0d0b768da43f7f434af72d,PodSandboxId:bca8e9a757e1c46d1ca2cedba74336bb99f1b505f861e6ca80ae9d5053f4ed3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718995807469500725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59d0df4fcf162ec60f5d928ad001ff6a374887d38c9f6791aab5c706f47c791,PodSandboxId:4e2453ce7944062b3c2f93ec84b80a2b6493725c3f52899047ed966b2d36fd6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718995807408632939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c120a578b20e1b617a5b93202c07c27c30de5bfc4580b4c826235b3afc8204,PodSandboxId:84fbafaf5a0bea8e4df39e98942eb41300c5281d1b6217f02587c6fa3fbd2b34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718995807315798233,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2e61853ab788fb7b5222dedf458d7085852d9caf32cf492e3bce968e130374,PodSandboxId:b77046a9f35081deae7f5de5700954014cb07d84dbad8bcca2e9ad955a3e015a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718995807128041977,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b0
97b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bba601718e9734309428daa119e2e5d6e129b3436277dc5011fa708f21b8de0,PodSandboxId:ef224dee216468e736bbfc8457b6d7542c385548fcb0666c2ff7fa52d43b1156,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718995801444255575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annota
tions:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adf7b4a3e9492eae203fe2ae963d6b1b131c8c6c809259fcf8ee94872bdf0bea,PodSandboxId:3d95d41781333e360e7471bd45a44f887d5365c40348dafee3d31ac6130d068b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718995801432250413,Labels:map[string]string{io
.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5
b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718994540131805136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718994459852946952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718994458069993945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718994457887549344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0
d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718994438148586283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a
899,State:CONTAINER_EXITED,CreatedAt:1718994438095721977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:17189
94438069880812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:171899443800395583
8,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=44b91a19-19d2-4ce0-9fcb-d524d304cae7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	09a2e3d098856       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   231b7531a974b       busybox-fc5497c4f-qvl48
	3eb10cac6d1c3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   2                   908bde46281af       coredns-7db6d8ff4d-7ng4v
	c869f01d25b20       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   2                   e10e95f5f35c0       coredns-7db6d8ff4d-nx5xs
	e35ca450b8450       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      3 minutes ago       Running             kube-vip                  0                   8fec4c6e62141       kube-vip-ha-406291
	e41ffe84b8dea       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      4 minutes ago       Running             kindnet-cni               1                   4a9342a5a2eeb       kindnet-vnds7
	246b5b36ac09f       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      4 minutes ago       Running             kube-proxy                1                   047b75f8fe402       kube-proxy-xnbqj
	e8dcbcf864ab9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   535a7ff15105f       etcd-ha-406291
	6ce53eeeec0f2       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      4 minutes ago       Running             kube-apiserver            1                   bca8e9a757e1c       kube-apiserver-ha-406291
	d59d0df4fcf16       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   4e2453ce79440       storage-provisioner
	e9c120a578b20       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      4 minutes ago       Running             kube-controller-manager   1                   84fbafaf5a0be       kube-controller-manager-ha-406291
	6f2e61853ab78       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      4 minutes ago       Running             kube-scheduler            1                   b77046a9f3508       kube-scheduler-ha-406291
	6bba601718e97       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Exited              coredns                   1                   ef224dee21646       coredns-7db6d8ff4d-7ng4v
	adf7b4a3e9492       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Exited              coredns                   1                   3d95d41781333       coredns-7db6d8ff4d-nx5xs
	252cb2f279857       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   25 minutes ago      Exited              busybox                   0                   cd0fd4f6a3d6c       busybox-fc5497c4f-qvl48
	9d0ad73531279       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      26 minutes ago      Exited              storage-provisioner       0                   ab6a16146209c       storage-provisioner
	468b13f5a8054       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      26 minutes ago      Exited              kindnet-cni               0                   956df8749e8db       kindnet-vnds7
	e41f8891c5177       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      26 minutes ago      Exited              kube-proxy                0                   ab9fd8c2e0094       kube-proxy-xnbqj
	a143e6000662a       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      26 minutes ago      Exited              kube-scheduler            0                   7cae0fc993f3a       kube-scheduler-ha-406291
	89b399d67fa40       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      26 minutes ago      Exited              etcd                      0                   afce4542ea7ca       etcd-ha-406291
	2d71c6ae5cee5       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      26 minutes ago      Exited              kube-apiserver            0                   9552de7a0cb73       kube-apiserver-ha-406291
	3fbe446b39e8d       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      26 minutes ago      Exited              kube-controller-manager   0                   2b8837f8e36da       kube-controller-manager-ha-406291
	
	
	==> coredns [3eb10cac6d1c3e97a71930fb9a7f4b79dce5391ffc03f1ea516374c17821d716] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54228 - 26713 "HINFO IN 4548532589898165947.6437560420477737975. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010774765s
	
	
	==> coredns [6bba601718e9734309428daa119e2e5d6e129b3436277dc5011fa708f21b8de0] <==
	
	
	==> coredns [adf7b4a3e9492eae203fe2ae963d6b1b131c8c6c809259fcf8ee94872bdf0bea] <==
	
	
	==> coredns [c869f01d25b200b4c3df8e084f4eff83bea86cbd7c409e04f0a85157042dec2c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:32776 - 50363 "HINFO IN 2533289171171185985.5104556903785863448. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020452492s
	
	
	==> describe nodes <==
	Name:               ha-406291
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=ha-406291
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_21T18_27_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 18:27:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406291
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 18:54:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 18:50:22 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 18:50:22 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 18:50:22 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 18:50:22 +0000   Fri, 21 Jun 2024 18:27:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    ha-406291
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 10b5f2f4e64d426eb3a71e7a23c0cea5
	  System UUID:                10b5f2f4-e64d-426e-b3a7-1e7a23c0cea5
	  Boot ID:                    10778ad9-ed13-4749-a084-25b2b2bfde76
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qvl48              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 coredns-7db6d8ff4d-7ng4v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-7db6d8ff4d-nx5xs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-406291                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         26m
	  kube-system                 kindnet-vnds7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-406291             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-controller-manager-ha-406291    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-xnbqj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-406291             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-vip-ha-406291                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 26m    kube-proxy       
	  Normal   Starting                 3m46s  kube-proxy       
	  Normal   Starting                 26m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  26m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  26m    kubelet          Node ha-406291 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    26m    kubelet          Node ha-406291 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     26m    kubelet          Node ha-406291 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           26m    node-controller  Node ha-406291 event: Registered Node ha-406291 in Controller
	  Normal   NodeReady                26m    kubelet          Node ha-406291 status is now: NodeReady
	  Warning  ContainerGCFailed        4m45s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           3m45s  node-controller  Node ha-406291 event: Registered Node ha-406291 in Controller
	
	
	Name:               ha-406291-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406291-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=ha-406291
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_21T18_41_02_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 18:41:01 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406291-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 18:46:17 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 21 Jun 2024 18:41:31 +0000   Fri, 21 Jun 2024 18:47:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 21 Jun 2024 18:41:31 +0000   Fri, 21 Jun 2024 18:47:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 21 Jun 2024 18:41:31 +0000   Fri, 21 Jun 2024 18:47:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 21 Jun 2024 18:41:31 +0000   Fri, 21 Jun 2024 18:47:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.193
	  Hostname:    ha-406291-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7aeb6d6b65b246d89e229cf308cb4c9a
	  System UUID:                7aeb6d6b-65b2-46d8-9e22-9cf308cb4c9a
	  Boot ID:                    077bb108-4737-40c3-9892-3695b5a49d4a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-drm4v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kindnet-xrm6w              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-vknv4           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x2 over 13m)  kubelet          Node ha-406291-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x2 over 13m)  kubelet          Node ha-406291-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node ha-406291-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node ha-406291-m03 event: Registered Node ha-406291-m03 in Controller
	  Normal  NodeReady                12m                kubelet          Node ha-406291-m03 status is now: NodeReady
	  Normal  NodeNotReady             7m7s               node-controller  Node ha-406291-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           3m45s              node-controller  Node ha-406291-m03 event: Registered Node ha-406291-m03 in Controller
	
	
	==> dmesg <==
	[  +4.855560] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun21 18:27] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.057394] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056681] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.167604] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.147792] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.253886] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +3.905184] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.549385] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.060073] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.066237] systemd-fstab-generator[1360]: Ignoring "noauto" option for root device
	[  +0.078680] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.552032] kauditd_printk_skb: 21 callbacks suppressed
	[Jun21 18:28] kauditd_printk_skb: 74 callbacks suppressed
	[Jun21 18:50] systemd-fstab-generator[4547]: Ignoring "noauto" option for root device
	[  +0.147300] systemd-fstab-generator[4559]: Ignoring "noauto" option for root device
	[  +0.179225] systemd-fstab-generator[4573]: Ignoring "noauto" option for root device
	[  +0.153967] systemd-fstab-generator[4585]: Ignoring "noauto" option for root device
	[  +0.498288] systemd-fstab-generator[4740]: Ignoring "noauto" option for root device
	[  +0.987159] systemd-fstab-generator[4965]: Ignoring "noauto" option for root device
	[  +4.443961] kauditd_printk_skb: 142 callbacks suppressed
	[ +14.867731] kauditd_printk_skb: 86 callbacks suppressed
	[  +7.940594] kauditd_printk_skb: 16 callbacks suppressed
	
	
	==> etcd [89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8] <==
	{"level":"info","ts":"2024-06-21T18:27:37.357719Z","caller":"traceutil/trace.go:171","msg":"trace[571743030] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"105.067279ms","start":"2024-06-21T18:27:37.252598Z","end":"2024-06-21T18:27:37.357665Z","steps":["trace[571743030] 'process raft request'  (duration: 48.775466ms)","trace[571743030] 'compare'  (duration: 56.093787ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T18:28:12.689426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.176174ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11593268453381319053 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:496 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:369 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-21T18:28:12.689586Z","caller":"traceutil/trace.go:171","msg":"trace[939483523] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"172.541349ms","start":"2024-06-21T18:28:12.517021Z","end":"2024-06-21T18:28:12.689563Z","steps":["trace[939483523] 'process raft request'  (duration: 46.605278ms)","trace[939483523] 'compare'  (duration: 124.988397ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-21T18:37:19.55118Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2024-06-21T18:37:19.562898Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":969,"took":"11.353931ms","hash":518064132,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2441216,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-06-21T18:37:19.562955Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":518064132,"revision":969,"compact-revision":-1}
	{"level":"info","ts":"2024-06-21T18:41:01.46327Z","caller":"traceutil/trace.go:171","msg":"trace[373022302] transaction","detail":"{read_only:false; response_revision:1916; number_of_response:1; }","duration":"202.232692ms","start":"2024-06-21T18:41:01.260997Z","end":"2024-06-21T18:41:01.46323Z","steps":["trace[373022302] 'process raft request'  (duration: 201.291371ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-21T18:41:01.463374Z","caller":"traceutil/trace.go:171","msg":"trace[1787973675] transaction","detail":"{read_only:false; response_revision:1917; number_of_response:1; }","duration":"177.381269ms","start":"2024-06-21T18:41:01.285981Z","end":"2024-06-21T18:41:01.463362Z","steps":["trace[1787973675] 'process raft request'  (duration: 177.120594ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-21T18:42:19.558621Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1509}
	{"level":"info","ts":"2024-06-21T18:42:19.563203Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1509,"took":"4.232264ms","hash":4134822789,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2011136,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-06-21T18:42:19.563247Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4134822789,"revision":1509,"compact-revision":969}
	{"level":"info","ts":"2024-06-21T18:47:19.567745Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2121}
	{"level":"info","ts":"2024-06-21T18:47:19.578898Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2121,"took":"9.848541ms","hash":4103272021,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2158592,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-06-21T18:47:19.579002Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4103272021,"revision":2121,"compact-revision":1509}
	{"level":"info","ts":"2024-06-21T18:48:28.996649Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-06-21T18:48:28.997685Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"ha-406291","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.198:2380"],"advertise-client-urls":["https://192.168.39.198:2379"]}
	{"level":"warn","ts":"2024-06-21T18:48:28.997914Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/06/21 18:48:28 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-21T18:48:29.019664Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-21T18:48:29.07084Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.198:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-21T18:48:29.070996Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.198:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-21T18:48:29.071071Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f1d2ab5330a2a0e3","current-leader-member-id":"f1d2ab5330a2a0e3"}
	{"level":"info","ts":"2024-06-21T18:48:29.073709Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.198:2380"}
	{"level":"info","ts":"2024-06-21T18:48:29.073927Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.198:2380"}
	{"level":"info","ts":"2024-06-21T18:48:29.073993Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-406291","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.198:2380"],"advertise-client-urls":["https://192.168.39.198:2379"]}
	
	
	==> etcd [e8dcbcf864ab99955feff994f6bcd539edc4380e9bffd7cd534dd967c7bad498] <==
	{"level":"info","ts":"2024-06-21T18:50:08.468075Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-21T18:50:08.468105Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-21T18:50:08.501093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 switched to configuration voters=(17425178282036469987)"}
	{"level":"info","ts":"2024-06-21T18:50:08.50936Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9fb372ad12afeb1b","local-member-id":"f1d2ab5330a2a0e3","added-peer-id":"f1d2ab5330a2a0e3","added-peer-peer-urls":["https://192.168.39.198:2380"]}
	{"level":"info","ts":"2024-06-21T18:50:08.509531Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9fb372ad12afeb1b","local-member-id":"f1d2ab5330a2a0e3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:50:08.509572Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:50:08.501761Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-21T18:50:08.529317Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f1d2ab5330a2a0e3","initial-advertise-peer-urls":["https://192.168.39.198:2380"],"listen-peer-urls":["https://192.168.39.198:2380"],"advertise-client-urls":["https://192.168.39.198:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.198:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-21T18:50:08.529422Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-21T18:50:08.501793Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.198:2380"}
	{"level":"info","ts":"2024-06-21T18:50:08.529674Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.198:2380"}
	{"level":"info","ts":"2024-06-21T18:50:10.027082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-21T18:50:10.02726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-21T18:50:10.027346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgPreVoteResp from f1d2ab5330a2a0e3 at term 2"}
	{"level":"info","ts":"2024-06-21T18:50:10.027392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became candidate at term 3"}
	{"level":"info","ts":"2024-06-21T18:50:10.027417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgVoteResp from f1d2ab5330a2a0e3 at term 3"}
	{"level":"info","ts":"2024-06-21T18:50:10.027444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became leader at term 3"}
	{"level":"info","ts":"2024-06-21T18:50:10.027474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f1d2ab5330a2a0e3 elected leader f1d2ab5330a2a0e3 at term 3"}
	{"level":"info","ts":"2024-06-21T18:50:10.029196Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f1d2ab5330a2a0e3","local-member-attributes":"{Name:ha-406291 ClientURLs:[https://192.168.39.198:2379]}","request-path":"/0/members/f1d2ab5330a2a0e3/attributes","cluster-id":"9fb372ad12afeb1b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-21T18:50:10.029242Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:50:10.02933Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:50:10.02982Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-21T18:50:10.029851Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-21T18:50:10.031528Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-21T18:50:10.031596Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.198:2379"}
	
	
	==> kernel <==
	 18:54:09 up 27 min,  0 users,  load average: 0.21, 0.33, 0.25
	Linux ha-406291 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d] <==
	I0621 18:47:19.889708       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:47:29.896242       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:47:29.896507       1 main.go:227] handling current node
	I0621 18:47:29.896581       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:47:29.896607       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:47:39.900437       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:47:39.900471       1 main.go:227] handling current node
	I0621 18:47:39.900481       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:47:39.900486       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:47:49.910179       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:47:49.910364       1 main.go:227] handling current node
	I0621 18:47:49.910412       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:47:49.910433       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:47:59.920904       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:47:59.921055       1 main.go:227] handling current node
	I0621 18:47:59.921083       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:47:59.921104       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:48:09.925491       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:48:09.925574       1 main.go:227] handling current node
	I0621 18:48:09.925596       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:48:09.925612       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:48:19.931901       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:48:19.931924       1 main.go:227] handling current node
	I0621 18:48:19.931934       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:48:19.931948       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [e41ffe84b8dea76129f1fa5d5726f6cf43e1409a959998ebe3a3fc56d8699d7f] <==
	I0621 18:53:01.669662       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:53:11.676026       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:53:11.676256       1 main.go:227] handling current node
	I0621 18:53:11.676299       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:53:11.677083       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:53:21.681340       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:53:21.681491       1 main.go:227] handling current node
	I0621 18:53:21.681517       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:53:21.681535       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:53:31.688278       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:53:31.688318       1 main.go:227] handling current node
	I0621 18:53:31.688332       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:53:31.688338       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:53:41.701842       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:53:41.701885       1 main.go:227] handling current node
	I0621 18:53:41.701909       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:53:41.701915       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:53:51.716954       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:53:51.717674       1 main.go:227] handling current node
	I0621 18:53:51.717721       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:53:51.717779       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:54:01.725293       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:54:01.725480       1 main.go:227] handling current node
	I0621 18:54:01.725509       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:54:01.725528       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3] <==
	I0621 18:48:29.003941       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0621 18:48:29.003974       1 establishing_controller.go:87] Shutting down EstablishingController
	I0621 18:48:29.004016       1 naming_controller.go:302] Shutting down NamingConditionController
	I0621 18:48:29.004054       1 controller.go:117] Shutting down OpenAPI V3 controller
	I0621 18:48:29.004093       1 controller.go:167] Shutting down OpenAPI controller
	I0621 18:48:29.004170       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0621 18:48:29.004222       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0621 18:48:29.004270       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0621 18:48:29.004356       1 controller.go:129] Ending legacy_token_tracking_controller
	I0621 18:48:29.004425       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0621 18:48:29.004499       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0621 18:48:29.004582       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0621 18:48:29.004661       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0621 18:48:29.005398       1 available_controller.go:439] Shutting down AvailableConditionController
	I0621 18:48:29.005443       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0621 18:48:29.009516       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0621 18:48:29.014355       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0621 18:48:29.017571       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0621 18:48:29.018587       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0621 18:48:29.018611       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0621 18:48:29.018651       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0621 18:48:29.018710       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0621 18:48:29.018731       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0621 18:48:29.022079       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	W0621 18:48:29.024248       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [6ce53eeeec0f21c6681925b7c5e72b8595ab65de8b0d0b768da43f7f434af72d] <==
	I0621 18:50:11.388689       1 controller.go:87] Starting OpenAPI V3 controller
	I0621 18:50:11.388786       1 naming_controller.go:291] Starting NamingConditionController
	I0621 18:50:11.388849       1 establishing_controller.go:76] Starting EstablishingController
	I0621 18:50:11.388914       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0621 18:50:11.388976       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0621 18:50:11.389024       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0621 18:50:11.459446       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0621 18:50:11.461317       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0621 18:50:11.461355       1 policy_source.go:224] refreshing policies
	I0621 18:50:11.462236       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0621 18:50:11.462495       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0621 18:50:11.462570       1 shared_informer.go:320] Caches are synced for configmaps
	I0621 18:50:11.462620       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0621 18:50:11.462560       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0621 18:50:11.463762       1 aggregator.go:165] initial CRD sync complete...
	I0621 18:50:11.463819       1 autoregister_controller.go:141] Starting autoregister controller
	I0621 18:50:11.463843       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0621 18:50:11.463901       1 cache.go:39] Caches are synced for autoregister controller
	I0621 18:50:11.464074       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0621 18:50:11.465293       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0621 18:50:11.469748       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0621 18:50:11.553642       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0621 18:50:12.365967       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0621 18:50:24.661126       1 controller.go:615] quota admission added evaluator for: endpoints
	I0621 18:50:24.756657       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31] <==
	I0621 18:27:39.330983       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.725µs"
	I0621 18:27:39.352409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.246µs"
	I0621 18:27:39.366116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.163µs"
	I0621 18:27:40.575618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="65.679µs"
	I0621 18:27:40.612176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.937752ms"
	I0621 18:27:40.612598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.232µs"
	I0621 18:27:40.634931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.444693ms"
	I0621 18:27:40.635035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.847µs"
	I0621 18:27:41.885215       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0621 18:28:57.137627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.563277ms"
	I0621 18:28:57.164070       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.375749ms"
	I0621 18:28:57.164194       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.743µs"
	I0621 18:29:00.876863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.452577ms"
	I0621 18:29:00.877083       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.932µs"
	I0621 18:41:01.468373       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-406291-m03\" does not exist"
	I0621 18:41:01.505245       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-406291-m03" podCIDRs=["10.244.1.0/24"]
	I0621 18:41:02.015312       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-406291-m03"
	I0621 18:41:10.879504       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-406291-m03"
	I0621 18:41:10.905675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="137.95µs"
	I0621 18:41:10.905996       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.91µs"
	I0621 18:41:10.921286       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.939µs"
	I0621 18:41:14.431187       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.902838ms"
	I0621 18:41:14.431268       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.911µs"
	I0621 18:47:02.153491       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.967868ms"
	I0621 18:47:02.153669       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.935µs"
	
	
	==> kube-controller-manager [e9c120a578b20e1b617a5b93202c07c27c30de5bfc4580b4c826235b3afc8204] <==
	I0621 18:50:24.543483       1 shared_informer.go:320] Caches are synced for deployment
	I0621 18:50:24.546612       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0621 18:50:24.546752       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.258µs"
	I0621 18:50:24.551547       1 shared_informer.go:320] Caches are synced for endpoint
	I0621 18:50:24.553356       1 shared_informer.go:320] Caches are synced for daemon sets
	I0621 18:50:24.553388       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0621 18:50:24.554288       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0621 18:50:24.556627       1 shared_informer.go:320] Caches are synced for stateful set
	I0621 18:50:24.558593       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0621 18:50:24.567415       1 shared_informer.go:320] Caches are synced for attach detach
	I0621 18:50:24.567453       1 shared_informer.go:320] Caches are synced for resource quota
	I0621 18:50:24.586989       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.232569ms"
	I0621 18:50:24.587087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="68.533µs"
	I0621 18:50:24.602738       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0621 18:50:24.603613       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0621 18:50:24.603724       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0621 18:50:24.603738       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0621 18:50:24.652886       1 shared_informer.go:320] Caches are synced for persistent volume
	I0621 18:50:24.653029       1 shared_informer.go:320] Caches are synced for PV protection
	I0621 18:50:25.040469       1 shared_informer.go:320] Caches are synced for garbage collector
	I0621 18:50:25.040558       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0621 18:50:25.050749       1 shared_informer.go:320] Caches are synced for garbage collector
	I0621 18:50:29.659533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="11.629839ms"
	I0621 18:50:29.659680       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.553µs"
	I0621 18:50:45.265661       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.751µs"
	
	
	==> kube-proxy [246b5b36ac09f427c065ee257a5df705d3a4d6bb3c0bce5b8322f7d64496dc52] <==
	I0621 18:50:09.288398       1 server_linux.go:69] "Using iptables proxy"
	E0621 18:50:12.442279       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-406291\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0621 18:50:15.512951       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-406291\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0621 18:50:18.585517       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-406291\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0621 18:50:22.984302       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.198"]
	I0621 18:50:23.021021       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0621 18:50:23.021181       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0621 18:50:23.021227       1 server_linux.go:165] "Using iptables Proxier"
	I0621 18:50:23.023762       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0621 18:50:23.024088       1 server.go:872] "Version info" version="v1.30.2"
	I0621 18:50:23.024245       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 18:50:23.025824       1 config.go:192] "Starting service config controller"
	I0621 18:50:23.025902       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0621 18:50:23.025971       1 config.go:101] "Starting endpoint slice config controller"
	I0621 18:50:23.025989       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0621 18:50:23.026706       1 config.go:319] "Starting node config controller"
	I0621 18:50:23.026831       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0621 18:50:23.127003       1 shared_informer.go:320] Caches are synced for node config
	I0621 18:50:23.127050       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0621 18:50:23.127115       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547] <==
	I0621 18:27:38.126736       1 server_linux.go:69] "Using iptables proxy"
	I0621 18:27:38.143236       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.198"]
	I0621 18:27:38.177576       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0621 18:27:38.177626       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0621 18:27:38.177644       1 server_linux.go:165] "Using iptables Proxier"
	I0621 18:27:38.180797       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0621 18:27:38.181002       1 server.go:872] "Version info" version="v1.30.2"
	I0621 18:27:38.181026       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 18:27:38.182882       1 config.go:192] "Starting service config controller"
	I0621 18:27:38.183195       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0621 18:27:38.183262       1 config.go:101] "Starting endpoint slice config controller"
	I0621 18:27:38.183278       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0621 18:27:38.184787       1 config.go:319] "Starting node config controller"
	I0621 18:27:38.184819       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0621 18:27:38.283818       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0621 18:27:38.283839       1 shared_informer.go:320] Caches are synced for service config
	I0621 18:27:38.285303       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6f2e61853ab788fb7b5222dedf458d7085852d9caf32cf492e3bce968e130374] <==
	I0621 18:50:08.290679       1 serving.go:380] Generated self-signed cert in-memory
	W0621 18:50:11.414815       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0621 18:50:11.414966       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0621 18:50:11.415056       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0621 18:50:11.415082       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0621 18:50:11.447211       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0621 18:50:11.448436       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 18:50:11.456933       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0621 18:50:11.457032       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0621 18:50:11.457077       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0621 18:50:11.460859       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0621 18:50:11.557723       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d] <==
	E0621 18:27:21.176992       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:21.177025       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177056       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0621 18:27:21.177088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0621 18:27:21.177120       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0621 18:27:21.177197       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:21.177204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0621 18:27:22.041765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.041824       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.144830       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:22.144881       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0621 18:27:22.217224       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.217266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.256407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:22.256450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0621 18:27:22.361486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0621 18:27:22.361536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0621 18:27:22.366073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0621 18:27:22.366190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0621 18:27:25.267361       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0621 18:48:28.987861       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0621 18:48:28.987988       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0621 18:48:28.988601       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 21 18:50:22 ha-406291 kubelet[1367]: I0621 18:50:22.421287    1367 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-406291"
	Jun 21 18:50:22 ha-406291 kubelet[1367]: I0621 18:50:22.432917    1367 scope.go:117] "RemoveContainer" containerID="adf7b4a3e9492eae203fe2ae963d6b1b131c8c6c809259fcf8ee94872bdf0bea"
	Jun 21 18:50:22 ha-406291 kubelet[1367]: I0621 18:50:22.434123    1367 scope.go:117] "RemoveContainer" containerID="6bba601718e9734309428daa119e2e5d6e129b3436277dc5011fa708f21b8de0"
	Jun 21 18:50:24 ha-406291 kubelet[1367]: E0621 18:50:24.491904    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:50:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:50:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:50:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:50:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:51:24 ha-406291 kubelet[1367]: E0621 18:51:24.484207    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:51:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:51:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:51:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:51:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:51:26 ha-406291 kubelet[1367]: I0621 18:51:26.432644    1367 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-406291" podUID="48932727-9ffb-476e-8b2a-ee40959393c5"
	Jun 21 18:51:49 ha-406291 kubelet[1367]: I0621 18:51:49.719495    1367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-qvl48" podStartSLOduration=1370.151479628 podStartE2EDuration="22m52.719456002s" podCreationTimestamp="2024-06-21 18:28:57 +0000 UTC" firstStartedPulling="2024-06-21 18:28:57.551504492 +0000 UTC m=+93.252502721" lastFinishedPulling="2024-06-21 18:29:00.119480863 +0000 UTC m=+95.820479095" observedRunningTime="2024-06-21 18:29:00.862800003 +0000 UTC m=+96.563798241" watchObservedRunningTime="2024-06-21 18:51:49.719456002 +0000 UTC m=+1465.420454249"
	Jun 21 18:52:24 ha-406291 kubelet[1367]: E0621 18:52:24.483755    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:52:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:52:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:52:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:52:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:53:24 ha-406291 kubelet[1367]: E0621 18:53:24.483552    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:53:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:53:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:53:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:53:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0621 18:54:08.617569   38897 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19112-8111/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-406291 -n ha-406291
helpers_test.go:261: (dbg) Run:  kubectl --context ha-406291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-p2c87
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-406291 describe pod busybox-fc5497c4f-p2c87
helpers_test.go:282: (dbg) kubectl --context ha-406291 describe pod busybox-fc5497c4f-p2c87:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-p2c87
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q8tzk (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-q8tzk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  3m58s                default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  3m48s                default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  14m (x3 over 25m)    default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  7m46s (x3 over 13m)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (463.22s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (9.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-406291 node delete m03 -v=7 --alsologtostderr: (7.115143497s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr: exit status 2 (407.942838ms)

                                                
                                                
-- stdout --
	ha-406291
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-406291-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0621 18:54:17.644412   39108 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:54:17.644654   39108 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:54:17.644664   39108 out.go:304] Setting ErrFile to fd 2...
	I0621 18:54:17.644670   39108 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:54:17.644855   39108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:54:17.645041   39108 out.go:298] Setting JSON to false
	I0621 18:54:17.645068   39108 mustload.go:65] Loading cluster: ha-406291
	I0621 18:54:17.645180   39108 notify.go:220] Checking for updates...
	I0621 18:54:17.645459   39108 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:54:17.645475   39108 status.go:255] checking status of ha-406291 ...
	I0621 18:54:17.645879   39108 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:54:17.645954   39108 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:54:17.660528   39108 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33525
	I0621 18:54:17.660982   39108 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:54:17.661539   39108 main.go:141] libmachine: Using API Version  1
	I0621 18:54:17.661560   39108 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:54:17.661959   39108 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:54:17.662153   39108 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:54:17.663979   39108 status.go:330] ha-406291 host status = "Running" (err=<nil>)
	I0621 18:54:17.664006   39108 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:54:17.664293   39108 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:54:17.664324   39108 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:54:17.679504   39108 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45817
	I0621 18:54:17.679932   39108 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:54:17.680465   39108 main.go:141] libmachine: Using API Version  1
	I0621 18:54:17.680484   39108 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:54:17.680751   39108 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:54:17.680903   39108 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:54:17.683581   39108 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:54:17.684016   39108 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:54:17.684044   39108 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:54:17.684171   39108 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:54:17.684434   39108 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:54:17.684464   39108 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:54:17.698646   39108 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44903
	I0621 18:54:17.698986   39108 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:54:17.699403   39108 main.go:141] libmachine: Using API Version  1
	I0621 18:54:17.699421   39108 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:54:17.699697   39108 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:54:17.699869   39108 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:54:17.700021   39108 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:54:17.700046   39108 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:54:17.702600   39108 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:54:17.702959   39108 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:54:17.702983   39108 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:54:17.703145   39108 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:54:17.703279   39108 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:54:17.703413   39108 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:54:17.703528   39108 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:54:17.781192   39108 ssh_runner.go:195] Run: systemctl --version
	I0621 18:54:17.787387   39108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:54:17.801403   39108 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:54:17.801433   39108 api_server.go:166] Checking apiserver status ...
	I0621 18:54:17.801463   39108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0621 18:54:17.815027   39108 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5520/cgroup
	W0621 18:54:17.823844   39108 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5520/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:54:17.823894   39108 ssh_runner.go:195] Run: ls
	I0621 18:54:17.827642   39108 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0621 18:54:17.831682   39108 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0621 18:54:17.831701   39108 status.go:422] ha-406291 apiserver status = Running (err=<nil>)
	I0621 18:54:17.831710   39108 status.go:257] ha-406291 status: &{Name:ha-406291 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 18:54:17.831729   39108 status.go:255] checking status of ha-406291-m02 ...
	I0621 18:54:17.832005   39108 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:54:17.832033   39108 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:54:17.846783   39108 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34973
	I0621 18:54:17.847195   39108 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:54:17.847690   39108 main.go:141] libmachine: Using API Version  1
	I0621 18:54:17.847725   39108 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:54:17.848066   39108 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:54:17.848282   39108 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:54:17.849945   39108 status.go:330] ha-406291-m02 host status = "Running" (err=<nil>)
	I0621 18:54:17.849961   39108 host.go:66] Checking if "ha-406291-m02" exists ...
	I0621 18:54:17.850245   39108 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:54:17.850277   39108 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:54:17.864714   39108 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33673
	I0621 18:54:17.865095   39108 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:54:17.865590   39108 main.go:141] libmachine: Using API Version  1
	I0621 18:54:17.865613   39108 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:54:17.865947   39108 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:54:17.866128   39108 main.go:141] libmachine: (ha-406291-m02) Calling .GetIP
	I0621 18:54:17.868760   39108 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:54:17.869121   39108 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:54:17.869136   39108 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:54:17.869302   39108 host.go:66] Checking if "ha-406291-m02" exists ...
	I0621 18:54:17.869615   39108 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:54:17.869656   39108 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:54:17.883836   39108 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46875
	I0621 18:54:17.884243   39108 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:54:17.884671   39108 main.go:141] libmachine: Using API Version  1
	I0621 18:54:17.884707   39108 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:54:17.885012   39108 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:54:17.885165   39108 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:54:17.885293   39108 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:54:17.885309   39108 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:54:17.887806   39108 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:54:17.888198   39108 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:54:17.888221   39108 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:54:17.888377   39108 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:54:17.888531   39108 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:54:17.888670   39108 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:54:17.888793   39108 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:54:17.976975   39108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 18:54:17.994198   39108 kubeconfig.go:125] found "ha-406291" server: "https://192.168.39.254:8443"
	I0621 18:54:17.994225   39108 api_server.go:166] Checking apiserver status ...
	I0621 18:54:17.994253   39108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0621 18:54:18.008115   39108 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0621 18:54:18.008139   39108 status.go:422] ha-406291-m02 apiserver status = Stopped (err=<nil>)
	I0621 18:54:18.008151   39108 status.go:257] ha-406291-m02 status: &{Name:ha-406291-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr" : exit status 2
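The stderr above shows why the assertion fails: the status probe SSHes into ha-406291-m02, the kube-apiserver pgrep exits with status 1, and status therefore reports that node's kubelet and apiserver as Stopped and returns exit status 2. A hedged sketch of repeating the same checks by hand; these commands were not executed as part of this test run.

	# Repeat the probes that "minikube status" ran against the degraded secondary node.
	out/minikube-linux-amd64 -p ha-406291 ssh -n m02 -- sudo systemctl is-active kubelet
	out/minikube-linux-amd64 -p ha-406291 ssh -n m02 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr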
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-406291 -n ha-406291
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-406291 logs -n 25: (1.449579194s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |           |         |         |                     |                     |
	| node    | add -p ha-406291 -v=7                | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:41 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-406291 node stop m02 -v=7         | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:41 UTC | 21 Jun 24 18:41 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-406291 node start m02 -v=7        | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:41 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-406291 -v=7               | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:46 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-406291 -v=7                    | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:46 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-406291 --wait=true -v=7        | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:48 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-406291                    | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:54 UTC |                     |
	| node    | ha-406291 node delete m03 -v=7       | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:54 UTC | 21 Jun 24 18:54 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/21 18:48:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0621 18:48:27.831476   37614 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:48:27.831947   37614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:48:27.831958   37614 out.go:304] Setting ErrFile to fd 2...
	I0621 18:48:27.831963   37614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:48:27.832237   37614 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:48:27.832938   37614 out.go:298] Setting JSON to false
	I0621 18:48:27.833836   37614 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5406,"bootTime":1718990302,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 18:48:27.833898   37614 start.go:139] virtualization: kvm guest
	I0621 18:48:27.836380   37614 out.go:177] * [ha-406291] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 18:48:27.837785   37614 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 18:48:27.837821   37614 notify.go:220] Checking for updates...
	I0621 18:48:27.840567   37614 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 18:48:27.841953   37614 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:48:27.843187   37614 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:48:27.844558   37614 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 18:48:27.845907   37614 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 18:48:27.847613   37614 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:48:27.847732   37614 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 18:48:27.848413   37614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:48:27.848482   37614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:48:27.863080   37614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46699
	I0621 18:48:27.863473   37614 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:48:27.864007   37614 main.go:141] libmachine: Using API Version  1
	I0621 18:48:27.864033   37614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:48:27.864411   37614 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:48:27.864641   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:48:27.900101   37614 out.go:177] * Using the kvm2 driver based on existing profile
	I0621 18:48:27.901277   37614 start.go:297] selected driver: kvm2
	I0621 18:48:27.901299   37614 start.go:901] validating driver "kvm2" against &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.193 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:48:27.901441   37614 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 18:48:27.901750   37614 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:48:27.901843   37614 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 18:48:27.916614   37614 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 18:48:27.917318   37614 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0621 18:48:27.917379   37614 cni.go:84] Creating CNI manager for ""
	I0621 18:48:27.917391   37614 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0621 18:48:27.917453   37614 start.go:340] cluster config:
	{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.193 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:48:27.917576   37614 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:48:27.919430   37614 out.go:177] * Starting "ha-406291" primary control-plane node in "ha-406291" cluster
	I0621 18:48:27.920610   37614 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:48:27.920649   37614 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 18:48:27.920659   37614 cache.go:56] Caching tarball of preloaded images
	I0621 18:48:27.920773   37614 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:48:27.920787   37614 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:48:27.920894   37614 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:48:27.921114   37614 start.go:360] acquireMachinesLock for ha-406291: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:48:27.921161   37614 start.go:364] duration metric: took 28.141µs to acquireMachinesLock for "ha-406291"
	I0621 18:48:27.921180   37614 start.go:96] Skipping create...Using existing machine configuration
	I0621 18:48:27.921190   37614 fix.go:54] fixHost starting: 
	I0621 18:48:27.921463   37614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:48:27.921500   37614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:48:27.936449   37614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33963
	I0621 18:48:27.936960   37614 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:48:27.937520   37614 main.go:141] libmachine: Using API Version  1
	I0621 18:48:27.937546   37614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:48:27.937916   37614 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:48:27.938097   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:48:27.938231   37614 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:48:27.939757   37614 fix.go:112] recreateIfNeeded on ha-406291: state=Running err=<nil>
	W0621 18:48:27.939772   37614 fix.go:138] unexpected machine state, will restart: <nil>
	I0621 18:48:27.941724   37614 out.go:177] * Updating the running kvm2 "ha-406291" VM ...
	I0621 18:48:27.942997   37614 machine.go:94] provisionDockerMachine start ...
	I0621 18:48:27.943024   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:48:27.943206   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:48:27.945749   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:27.946257   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:27.946287   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:27.946456   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:48:27.946613   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:27.946788   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:27.946925   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:48:27.947091   37614 main.go:141] libmachine: Using SSH client type: native
	I0621 18:48:27.947292   37614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:48:27.947307   37614 main.go:141] libmachine: About to run SSH command:
	hostname
	I0621 18:48:28.051086   37614 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291
	
	I0621 18:48:28.051116   37614 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:48:28.051394   37614 buildroot.go:166] provisioning hostname "ha-406291"
	I0621 18:48:28.051420   37614 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:48:28.051618   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:48:28.054638   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.055076   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:28.055099   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.055296   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:48:28.055524   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.055672   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.055901   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:48:28.056090   37614 main.go:141] libmachine: Using SSH client type: native
	I0621 18:48:28.056290   37614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:48:28.056305   37614 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291 && echo "ha-406291" | sudo tee /etc/hostname
	I0621 18:48:28.169279   37614 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291
	
	I0621 18:48:28.169305   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:48:28.171914   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.172264   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:28.172307   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.172459   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:48:28.172637   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.172764   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.172937   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:48:28.173112   37614 main.go:141] libmachine: Using SSH client type: native
	I0621 18:48:28.173334   37614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:48:28.173358   37614 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:48:28.270684   37614 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 18:48:28.270733   37614 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:48:28.270776   37614 buildroot.go:174] setting up certificates
	I0621 18:48:28.270798   37614 provision.go:84] configureAuth start
	I0621 18:48:28.270816   37614 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:48:28.271110   37614 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:48:28.274048   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.274413   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:28.274440   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.274625   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:48:28.276911   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.277237   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:28.277273   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.277425   37614 provision.go:143] copyHostCerts
	I0621 18:48:28.277474   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:48:28.277514   37614 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:48:28.277525   37614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:48:28.277586   37614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:48:28.277681   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:48:28.277699   37614 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:48:28.277706   37614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:48:28.277732   37614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:48:28.277852   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:48:28.277874   37614 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:48:28.277881   37614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:48:28.277908   37614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:48:28.277967   37614 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291 san=[127.0.0.1 192.168.39.198 ha-406291 localhost minikube]
	I0621 18:48:28.770044   37614 provision.go:177] copyRemoteCerts
	I0621 18:48:28.770118   37614 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:48:28.770140   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:48:28.772531   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.772859   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:28.772888   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.773061   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:48:28.773274   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.773406   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:48:28.773544   37614 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:48:28.851817   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:48:28.851907   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:48:28.875949   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:48:28.876034   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0621 18:48:28.899404   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:48:28.899479   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0621 18:48:28.922832   37614 provision.go:87] duration metric: took 652.015125ms to configureAuth
	I0621 18:48:28.922865   37614 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:48:28.923083   37614 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:48:28.923147   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:48:28.925724   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.926104   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:28.926143   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.926302   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:48:28.926538   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.926671   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.926850   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:48:28.926962   37614 main.go:141] libmachine: Using SSH client type: native
	I0621 18:48:28.927117   37614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:48:28.927134   37614 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:49:59.775008   37614 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 18:49:59.775041   37614 machine.go:97] duration metric: took 1m31.832022982s to provisionDockerMachine
	I0621 18:49:59.775056   37614 start.go:293] postStartSetup for "ha-406291" (driver="kvm2")
	I0621 18:49:59.775071   37614 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:49:59.775090   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:49:59.775469   37614 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:49:59.775508   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:49:59.778762   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:49:59.779252   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:49:59.779278   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:49:59.779425   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:49:59.779621   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:49:59.779730   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:49:59.779846   37614 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:49:59.861058   37614 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:49:59.865212   37614 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:49:59.865238   37614 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:49:59.865306   37614 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:49:59.865412   37614 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:49:59.865426   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:49:59.865530   37614 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:49:59.874847   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:49:59.898766   37614 start.go:296] duration metric: took 123.693827ms for postStartSetup
	I0621 18:49:59.898814   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:49:59.899163   37614 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0621 18:49:59.899191   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:49:59.902342   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:49:59.902758   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:49:59.902781   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:49:59.902968   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:49:59.903148   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:49:59.903308   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:49:59.903440   37614 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	W0621 18:49:59.980000   37614 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0621 18:49:59.980025   37614 fix.go:56] duration metric: took 1m32.058837235s for fixHost
	I0621 18:49:59.980045   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:49:59.983376   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:49:59.983859   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:49:59.983891   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:49:59.984114   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:49:59.984357   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:49:59.984534   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:49:59.984719   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:49:59.984900   37614 main.go:141] libmachine: Using SSH client type: native
	I0621 18:49:59.985122   37614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:49:59.985139   37614 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0621 18:50:00.091107   37614 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718995800.019349431
	
	I0621 18:50:00.091140   37614 fix.go:216] guest clock: 1718995800.019349431
	I0621 18:50:00.091157   37614 fix.go:229] Guest: 2024-06-21 18:50:00.019349431 +0000 UTC Remote: 2024-06-21 18:49:59.98003189 +0000 UTC m=+92.182726233 (delta=39.317541ms)
	I0621 18:50:00.091202   37614 fix.go:200] guest clock delta is within tolerance: 39.317541ms
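The clock check above compares the guest's `date +%s.%N` output against the host's wall clock and only resyncs when the drift exceeds a tolerance. A minimal Go sketch of that comparison, using the two timestamps from the log lines above (the 5-second tolerance here is an assumed value for illustration, not necessarily minikube's threshold):

package main

import (
	"fmt"
	"math"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the
// host clock that no resync is needed; the sign of the drift does not matter.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	// Values taken from the log lines above.
	guest := time.Unix(1718995800, 19349431)                        // guest clock: 1718995800.019349431
	host := time.Date(2024, 6, 21, 18, 49, 59, 980031890, time.UTC) // Remote: 18:49:59.98003189 UTC

	delta, ok := withinTolerance(guest, host, 5*time.Second) // tolerance is an assumption
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)  // delta≈39.317541ms within tolerance=true
}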
	I0621 18:50:00.091209   37614 start.go:83] releasing machines lock for "ha-406291", held for 1m32.170035409s
	I0621 18:50:00.091239   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:50:00.091570   37614 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:50:00.094257   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:00.094684   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:50:00.094714   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:00.094867   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:50:00.095587   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:50:00.095720   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:50:00.095777   37614 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:50:00.095826   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:50:00.095948   37614 ssh_runner.go:195] Run: cat /version.json
	I0621 18:50:00.095969   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:50:00.099018   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:00.099048   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:00.099355   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:50:00.099392   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:00.099417   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:50:00.099546   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:50:00.099547   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:00.099784   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:50:00.099802   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:50:00.099953   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:50:00.099953   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:50:00.100151   37614 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:50:00.100166   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:50:00.100406   37614 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:50:00.221373   37614 ssh_runner.go:195] Run: systemctl --version
	I0621 18:50:00.227389   37614 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:50:00.385205   37614 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:50:00.394152   37614 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:50:00.394215   37614 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:50:00.403823   37614 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0621 18:50:00.403852   37614 start.go:494] detecting cgroup driver to use...
	I0621 18:50:00.403906   37614 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:50:00.419979   37614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:50:00.434440   37614 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:50:00.434502   37614 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:50:00.448314   37614 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:50:00.462079   37614 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:50:00.614685   37614 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:50:00.759729   37614 docker.go:233] disabling docker service ...
	I0621 18:50:00.759808   37614 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:50:00.777480   37614 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:50:00.792874   37614 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:50:00.942947   37614 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:50:01.096969   37614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 18:50:01.111115   37614 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:50:01.175106   37614 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:50:01.175190   37614 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.232028   37614 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:50:01.232101   37614 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.280475   37614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.294904   37614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.316249   37614 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:50:01.333062   37614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.348820   37614 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.371299   37614 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.389314   37614 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:50:01.401788   37614 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 18:50:01.422679   37614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:50:01.648445   37614 ssh_runner.go:195] Run: sudo systemctl restart crio
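The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted: they pin the pause image, force the cgroupfs cgroup manager, move conmon into the pod cgroup, and open unprivileged low ports via default_sysctls. A rough Go sketch of the same rewrites applied to an in-memory copy of the file; the sample input is invented for illustration and this is not minikube's actual implementation:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Sample 02-crio.conf content, invented for illustration.
const sample = `[crio.image]
pause_image = "registry.k8s.io/pause:3.2"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

func main() {
	conf := sample

	// Pin the pause image, as in the first sed above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

	// Switch the cgroup manager to cgroupfs.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = strings.Replace(conf,
		`cgroup_manager = "cgroupfs"`,
		"cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"", 1)

	// Ensure default_sysctls allows binding unprivileged low ports.
	if !strings.Contains(conf, "default_sysctls") {
		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}

	fmt.Print(conf)
}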
	I0621 18:50:02.047527   37614 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:50:02.047604   37614 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:50:02.052768   37614 start.go:562] Will wait 60s for crictl version
	I0621 18:50:02.052832   37614 ssh_runner.go:195] Run: which crictl
	I0621 18:50:02.056555   37614 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:50:02.094299   37614 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:50:02.094367   37614 ssh_runner.go:195] Run: crio --version
	I0621 18:50:02.123963   37614 ssh_runner.go:195] Run: crio --version
	I0621 18:50:02.156468   37614 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:50:02.158024   37614 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:50:02.161125   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:02.161548   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:50:02.161570   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:02.161875   37614 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:50:02.167481   37614 kubeadm.go:877] updating cluster {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.193 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0621 18:50:02.167692   37614 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:50:02.167755   37614 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:50:02.219832   37614 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 18:50:02.219854   37614 crio.go:433] Images already preloaded, skipping extraction
	I0621 18:50:02.219899   37614 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:50:02.255684   37614 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 18:50:02.255710   37614 cache_images.go:84] Images are preloaded, skipping loading
	I0621 18:50:02.255720   37614 kubeadm.go:928] updating node { 192.168.39.198 8443 v1.30.2 crio true true} ...
	I0621 18:50:02.255840   37614 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0621 18:50:02.255924   37614 ssh_runner.go:195] Run: crio config
	I0621 18:50:02.317976   37614 cni.go:84] Creating CNI manager for ""
	I0621 18:50:02.317997   37614 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0621 18:50:02.318008   37614 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0621 18:50:02.318027   37614 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.198 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-406291 NodeName:ha-406291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0621 18:50:02.318155   37614 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-406291"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
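The generated kubeadm config above fixes the pod subnet to 10.244.0.0/16 and the service subnet to 10.96.0.0/12, with the node at 192.168.39.198 and the control-plane VIP at 192.168.39.254. A small standard-library sanity check that neither address falls inside those ranges (purely illustrative, not part of minikube):

package main

import (
	"fmt"
	"net"
)

func main() {
	nodeIP := net.ParseIP("192.168.39.198")
	vip := net.ParseIP("192.168.39.254") // APIServerHAVIP from the cluster config

	for _, cidr := range []string{"10.244.0.0/16", "10.96.0.0/12"} {
		_, ipNet, err := net.ParseCIDR(cidr)
		if err != nil {
			panic(err)
		}
		// Neither the node IP nor the control-plane VIP should fall inside
		// the pod or service ranges handed to kubeadm.
		fmt.Printf("%s contains node %v: %v, vip %v: %v\n",
			cidr, nodeIP, ipNet.Contains(nodeIP), vip, ipNet.Contains(vip))
	}
}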
	
	I0621 18:50:02.318171   37614 kube-vip.go:115] generating kube-vip config ...
	I0621 18:50:02.318209   37614 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:50:02.331312   37614 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:50:02.331435   37614 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
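In the kube-vip manifest above, leader election is tuned with vip_leaseduration=5, vip_renewdeadline=3 and vip_retryperiod=1 (seconds). For those values to make sense, each retry must fit inside the renew deadline and the renew deadline inside the lease; a quick check of that invariant, shown only as an illustration:

package main

import "fmt"

func main() {
	// Values from the kube-vip pod spec above, in seconds.
	leaseDuration, renewDeadline, retryPeriod := 5, 3, 1

	// Leader-election timing only works if each retry fits inside the
	// renew deadline, and the renew deadline fits inside the lease itself.
	ok := retryPeriod < renewDeadline && renewDeadline < leaseDuration
	fmt.Printf("leaseDuration=%ds renewDeadline=%ds retryPeriod=%ds valid=%v\n",
		leaseDuration, renewDeadline, retryPeriod, ok)
}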
	I0621 18:50:02.331501   37614 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:50:02.342410   37614 binaries.go:44] Found k8s binaries, skipping transfer
	I0621 18:50:02.342501   37614 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0621 18:50:02.353833   37614 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0621 18:50:02.372067   37614 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0621 18:50:02.391049   37614 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0621 18:50:02.409310   37614 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0621 18:50:02.427547   37614 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0621 18:50:02.433079   37614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:50:02.582453   37614 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0621 18:50:02.598236   37614 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.198
	I0621 18:50:02.598258   37614 certs.go:194] generating shared ca certs ...
	I0621 18:50:02.598278   37614 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:50:02.598473   37614 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:50:02.598527   37614 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:50:02.598538   37614 certs.go:256] generating profile certs ...
	I0621 18:50:02.598630   37614 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:50:02.598657   37614 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.9def4995
	I0621 18:50:02.598668   37614 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.9def4995 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.89 192.168.39.254]
	I0621 18:50:02.663764   37614 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.9def4995 ...
	I0621 18:50:02.663805   37614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.9def4995: {Name:mk333c8edf0e5497704ceac44948ed6d5eae057c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:50:02.664011   37614 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.9def4995 ...
	I0621 18:50:02.664028   37614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.9def4995: {Name:mk5cd7253a5d75c3e8a117ab1180e6cf66770645 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:50:02.664122   37614 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.9def4995 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:50:02.664288   37614 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.9def4995 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
	I0621 18:50:02.664452   37614 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:50:02.664473   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:50:02.664492   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:50:02.664510   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:50:02.664528   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:50:02.664544   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:50:02.664558   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:50:02.664575   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:50:02.664593   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:50:02.664653   37614 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:50:02.664692   37614 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:50:02.664704   37614 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:50:02.664743   37614 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:50:02.664779   37614 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:50:02.664808   37614 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:50:02.664862   37614 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:50:02.664896   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:50:02.664913   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:50:02.664932   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:50:02.665576   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:50:02.694113   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:50:02.722523   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:50:02.749537   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:50:02.776614   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0621 18:50:02.805311   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0621 18:50:02.832592   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:50:02.857479   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:50:02.881711   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:50:02.907387   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:50:02.934334   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:50:02.959508   37614 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0621 18:50:02.977465   37614 ssh_runner.go:195] Run: openssl version
	I0621 18:50:02.983767   37614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:50:02.995314   37614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:50:03.001937   37614 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:50:03.002002   37614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:50:03.009327   37614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0621 18:50:03.022240   37614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:50:03.037533   37614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:50:03.042517   37614 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:50:03.042581   37614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:50:03.048576   37614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:50:03.059273   37614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:50:03.071497   37614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:50:03.076360   37614 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:50:03.076413   37614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:50:03.082259   37614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 18:50:03.092484   37614 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:50:03.097277   37614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0621 18:50:03.103376   37614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0621 18:50:03.109351   37614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0621 18:50:03.115157   37614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0621 18:50:03.120911   37614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0621 18:50:03.126507   37614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
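Each `openssl x509 -checkend 86400` call above simply asks whether the certificate remains valid for at least another day. The equivalent check in Go, run here against a throwaway self-signed certificate because the real ones live on the guest (illustrative sketch only):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// validFor reports whether cert is still valid for at least d from now,
// mirroring `openssl x509 -checkend`.
func validFor(cert *x509.Certificate, d time.Duration) bool {
	return time.Now().Add(d).Before(cert.NotAfter)
}

func main() {
	// Throwaway self-signed certificate, valid for 30 days.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "example"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(30 * 24 * time.Hour),
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	fmt.Println("valid for another 24h:", validFor(cert, 24*time.Hour))
}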
	I0621 18:50:03.132154   37614 kubeadm.go:391] StartCluster: {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.193 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:50:03.132279   37614 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0621 18:50:03.132331   37614 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0621 18:50:03.170290   37614 cri.go:89] found id: "6bba601718e9734309428daa119e2e5d6e129b3436277dc5011fa708f21b8de0"
	I0621 18:50:03.170317   37614 cri.go:89] found id: "adf7b4a3e9492eae203fe2ae963d6b1b131c8c6c809259fcf8ee94872bdf0bea"
	I0621 18:50:03.170320   37614 cri.go:89] found id: "6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25"
	I0621 18:50:03.170323   37614 cri.go:89] found id: "6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c"
	I0621 18:50:03.170326   37614 cri.go:89] found id: "9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b"
	I0621 18:50:03.170329   37614 cri.go:89] found id: "468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d"
	I0621 18:50:03.170331   37614 cri.go:89] found id: "e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547"
	I0621 18:50:03.170334   37614 cri.go:89] found id: "96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631"
	I0621 18:50:03.170336   37614 cri.go:89] found id: "a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d"
	I0621 18:50:03.170341   37614 cri.go:89] found id: "89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8"
	I0621 18:50:03.170344   37614 cri.go:89] found id: "2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3"
	I0621 18:50:03.170346   37614 cri.go:89] found id: "3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31"
	I0621 18:50:03.170349   37614 cri.go:89] found id: ""
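The container IDs above come from `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`, which prints one ID per line; the final empty entry is just the trailing newline. A rough sketch of splitting that output into IDs while dropping the blank tail (not minikube's actual parser):

package main

import (
	"fmt"
	"strings"
)

// parseIDs splits the --quiet output of crictl (one container ID per line)
// and drops blank entries produced by the trailing newline.
func parseIDs(out string) []string {
	var ids []string
	for _, line := range strings.Split(out, "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids
}

func main() {
	// Two of the IDs from the log above, plus the trailing newline.
	out := "6bba601718e9734309428daa119e2e5d6e129b3436277dc5011fa708f21b8de0\n" +
		"adf7b4a3e9492eae203fe2ae963d6b1b131c8c6c809259fcf8ee94872bdf0bea\n"
	for _, id := range parseIDs(out) {
		fmt.Println("found id:", id)
	}
}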
	I0621 18:50:03.170399   37614 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.566605625Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718996058566583726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=979c4ebe-86c4-4a9f-a0b8-6d5968f79d77 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.567115779Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3aecb126-5829-46a4-9677-dbbdb15ca547 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.567218831Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3aecb126-5829-46a4-9677-dbbdb15ca547 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.567627156Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:09a2e3d098856f2200e39c92669f6f175a32d42297a9a3d5c291978d1f8d0d74,PodSandboxId:231b7531a974b4fa1168f271b37ea5cf33df2e5ab59ea67d46149f9a8197404b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718995840721463906,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb10cac6d1c3e97a71930fb9a7f4b79dce5391ffc03f1ea516374c17821d716,PodSandboxId:908bde46281af414c0075aabce7890dfa087f381a3ef9a5b0651ab520cdb8435,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718995822483073221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"con
tainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c869f01d25b200b4c3df8e084f4eff83bea86cbd7c409e04f0a85157042dec2c,PodSandboxId:e10e95f5f35c01c0eb2ad3a0a49910bd49cf827b26c09a78b7dd3d2faa15fe55,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718995822456885612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c
679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35ca450b8450a611e9ad835bbf3d408c728e7e7d1fbf258c8f249d80bcf038f,PodSandboxId:8fec4c6e62141364888e488aa814c1f06b60e58be5c4bb875b6e1eb5ffc4a250,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718995821779424178,Labels:map[st
ring]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 369c576788ec675acc0ff507ad4caf20,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:246b5b36ac09f427c065ee257a5df705d3a4d6bb3c0bce5b8322f7d64496dc52,PodSandboxId:047b75f8fe402d3c3c7fcc65fc18c56ffec45e20f3f1a452338a41433d34e078,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718995807698855971,Labels:map[string]string{io.kubernetes.container.n
ame: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41ffe84b8dea76129f1fa5d5726f6cf43e1409a959998ebe3a3fc56d8699d7f,PodSandboxId:4a9342a5a2eeb43140514126f52d0c9fd38f727529c857e0891c8bf2d31c4a8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718995807806583037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.p
od.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dcbcf864ab99955feff994f6bcd539edc4380e9bffd7cd534dd967c7bad498,PodSandboxId:535a7ff15105f569395c6cf7f02fefc79c194a97e051fa5af9412f15bd20af54,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718995807504571464,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce53eeeec0f21c6681925b7c5e72b8595ab65de8b0d0b768da43f7f434af72d,PodSandboxId:bca8e9a757e1c46d1ca2cedba74336bb99f1b505f861e6ca80ae9d5053f4ed3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718995807469500725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59d0df4fcf162ec60f5d928ad001ff6a374887d38c9f6791aab5c706f47c791,PodSandboxId:4e2453ce7944062b3c2f93ec84b80a2b6493725c3f52899047ed966b2d36fd6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718995807408632939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c120a578b20e1b617a5b93202c07c27c30de5bfc4580b4c826235b3afc8204,PodSandboxId:84fbafaf5a0bea8e4df39e98942eb41300c5281d1b6217f02587c6fa3fbd2b34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718995807315798233,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2e61853ab788fb7b5222dedf458d7085852d9caf32cf492e3bce968e130374,PodSandboxId:b77046a9f35081deae7f5de5700954014cb07d84dbad8bcca2e9ad955a3e015a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718995807128041977,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b0
97b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bba601718e9734309428daa119e2e5d6e129b3436277dc5011fa708f21b8de0,PodSandboxId:ef224dee216468e736bbfc8457b6d7542c385548fcb0666c2ff7fa52d43b1156,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718995801444255575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annota
tions:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adf7b4a3e9492eae203fe2ae963d6b1b131c8c6c809259fcf8ee94872bdf0bea,PodSandboxId:3d95d41781333e360e7471bd45a44f887d5365c40348dafee3d31ac6130d068b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718995801432250413,Labels:map[string]string{io
.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5
b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718994540131805136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718994459852946952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718994458069993945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718994457887549344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0
d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718994438148586283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a
899,State:CONTAINER_EXITED,CreatedAt:1718994438095721977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:17189
94438069880812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:171899443800395583
8,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3aecb126-5829-46a4-9677-dbbdb15ca547 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.608659164Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=357d932b-5f4c-48d6-adbc-02d6aa79bdd8 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.608737554Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=357d932b-5f4c-48d6-adbc-02d6aa79bdd8 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.610126397Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5a0f3ebb-c522-41cb-b7aa-f43b8c810482 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.610772655Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718996058610735439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a0f3ebb-c522-41cb-b7aa-f43b8c810482 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.611426392Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f2d018e-b8d3-46c2-adb1-70bda39744ed name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.611484583Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f2d018e-b8d3-46c2-adb1-70bda39744ed name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.611879612Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:09a2e3d098856f2200e39c92669f6f175a32d42297a9a3d5c291978d1f8d0d74,PodSandboxId:231b7531a974b4fa1168f271b37ea5cf33df2e5ab59ea67d46149f9a8197404b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718995840721463906,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb10cac6d1c3e97a71930fb9a7f4b79dce5391ffc03f1ea516374c17821d716,PodSandboxId:908bde46281af414c0075aabce7890dfa087f381a3ef9a5b0651ab520cdb8435,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718995822483073221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"con
tainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c869f01d25b200b4c3df8e084f4eff83bea86cbd7c409e04f0a85157042dec2c,PodSandboxId:e10e95f5f35c01c0eb2ad3a0a49910bd49cf827b26c09a78b7dd3d2faa15fe55,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718995822456885612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c
679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35ca450b8450a611e9ad835bbf3d408c728e7e7d1fbf258c8f249d80bcf038f,PodSandboxId:8fec4c6e62141364888e488aa814c1f06b60e58be5c4bb875b6e1eb5ffc4a250,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718995821779424178,Labels:map[st
ring]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 369c576788ec675acc0ff507ad4caf20,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:246b5b36ac09f427c065ee257a5df705d3a4d6bb3c0bce5b8322f7d64496dc52,PodSandboxId:047b75f8fe402d3c3c7fcc65fc18c56ffec45e20f3f1a452338a41433d34e078,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718995807698855971,Labels:map[string]string{io.kubernetes.container.n
ame: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41ffe84b8dea76129f1fa5d5726f6cf43e1409a959998ebe3a3fc56d8699d7f,PodSandboxId:4a9342a5a2eeb43140514126f52d0c9fd38f727529c857e0891c8bf2d31c4a8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718995807806583037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.p
od.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dcbcf864ab99955feff994f6bcd539edc4380e9bffd7cd534dd967c7bad498,PodSandboxId:535a7ff15105f569395c6cf7f02fefc79c194a97e051fa5af9412f15bd20af54,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718995807504571464,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce53eeeec0f21c6681925b7c5e72b8595ab65de8b0d0b768da43f7f434af72d,PodSandboxId:bca8e9a757e1c46d1ca2cedba74336bb99f1b505f861e6ca80ae9d5053f4ed3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718995807469500725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59d0df4fcf162ec60f5d928ad001ff6a374887d38c9f6791aab5c706f47c791,PodSandboxId:4e2453ce7944062b3c2f93ec84b80a2b6493725c3f52899047ed966b2d36fd6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718995807408632939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c120a578b20e1b617a5b93202c07c27c30de5bfc4580b4c826235b3afc8204,PodSandboxId:84fbafaf5a0bea8e4df39e98942eb41300c5281d1b6217f02587c6fa3fbd2b34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718995807315798233,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2e61853ab788fb7b5222dedf458d7085852d9caf32cf492e3bce968e130374,PodSandboxId:b77046a9f35081deae7f5de5700954014cb07d84dbad8bcca2e9ad955a3e015a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718995807128041977,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b0
97b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bba601718e9734309428daa119e2e5d6e129b3436277dc5011fa708f21b8de0,PodSandboxId:ef224dee216468e736bbfc8457b6d7542c385548fcb0666c2ff7fa52d43b1156,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718995801444255575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annota
tions:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adf7b4a3e9492eae203fe2ae963d6b1b131c8c6c809259fcf8ee94872bdf0bea,PodSandboxId:3d95d41781333e360e7471bd45a44f887d5365c40348dafee3d31ac6130d068b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718995801432250413,Labels:map[string]string{io
.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5
b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718994540131805136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718994459852946952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718994458069993945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718994457887549344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0
d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718994438148586283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a
899,State:CONTAINER_EXITED,CreatedAt:1718994438095721977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:17189
94438069880812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:171899443800395583
8,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f2d018e-b8d3-46c2-adb1-70bda39744ed name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.652525674Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ffb2c8a2-242d-4563-81f7-40fb86db942d name=/runtime.v1.RuntimeService/Version
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.652604575Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ffb2c8a2-242d-4563-81f7-40fb86db942d name=/runtime.v1.RuntimeService/Version
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.653962206Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd94cfb2-96da-4d33-91ff-075ec2cffb56 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.654432335Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718996058654404977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd94cfb2-96da-4d33-91ff-075ec2cffb56 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.655009406Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b75c66c0-204b-43b1-8b5f-d10f7e16cb9c name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.655066573Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b75c66c0-204b-43b1-8b5f-d10f7e16cb9c name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.655540691Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:09a2e3d098856f2200e39c92669f6f175a32d42297a9a3d5c291978d1f8d0d74,PodSandboxId:231b7531a974b4fa1168f271b37ea5cf33df2e5ab59ea67d46149f9a8197404b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718995840721463906,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb10cac6d1c3e97a71930fb9a7f4b79dce5391ffc03f1ea516374c17821d716,PodSandboxId:908bde46281af414c0075aabce7890dfa087f381a3ef9a5b0651ab520cdb8435,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718995822483073221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"con
tainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c869f01d25b200b4c3df8e084f4eff83bea86cbd7c409e04f0a85157042dec2c,PodSandboxId:e10e95f5f35c01c0eb2ad3a0a49910bd49cf827b26c09a78b7dd3d2faa15fe55,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718995822456885612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c
679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35ca450b8450a611e9ad835bbf3d408c728e7e7d1fbf258c8f249d80bcf038f,PodSandboxId:8fec4c6e62141364888e488aa814c1f06b60e58be5c4bb875b6e1eb5ffc4a250,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718995821779424178,Labels:map[st
ring]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 369c576788ec675acc0ff507ad4caf20,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:246b5b36ac09f427c065ee257a5df705d3a4d6bb3c0bce5b8322f7d64496dc52,PodSandboxId:047b75f8fe402d3c3c7fcc65fc18c56ffec45e20f3f1a452338a41433d34e078,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718995807698855971,Labels:map[string]string{io.kubernetes.container.n
ame: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41ffe84b8dea76129f1fa5d5726f6cf43e1409a959998ebe3a3fc56d8699d7f,PodSandboxId:4a9342a5a2eeb43140514126f52d0c9fd38f727529c857e0891c8bf2d31c4a8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718995807806583037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.p
od.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dcbcf864ab99955feff994f6bcd539edc4380e9bffd7cd534dd967c7bad498,PodSandboxId:535a7ff15105f569395c6cf7f02fefc79c194a97e051fa5af9412f15bd20af54,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718995807504571464,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce53eeeec0f21c6681925b7c5e72b8595ab65de8b0d0b768da43f7f434af72d,PodSandboxId:bca8e9a757e1c46d1ca2cedba74336bb99f1b505f861e6ca80ae9d5053f4ed3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718995807469500725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59d0df4fcf162ec60f5d928ad001ff6a374887d38c9f6791aab5c706f47c791,PodSandboxId:4e2453ce7944062b3c2f93ec84b80a2b6493725c3f52899047ed966b2d36fd6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718995807408632939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c120a578b20e1b617a5b93202c07c27c30de5bfc4580b4c826235b3afc8204,PodSandboxId:84fbafaf5a0bea8e4df39e98942eb41300c5281d1b6217f02587c6fa3fbd2b34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718995807315798233,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2e61853ab788fb7b5222dedf458d7085852d9caf32cf492e3bce968e130374,PodSandboxId:b77046a9f35081deae7f5de5700954014cb07d84dbad8bcca2e9ad955a3e015a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718995807128041977,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b0
97b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bba601718e9734309428daa119e2e5d6e129b3436277dc5011fa708f21b8de0,PodSandboxId:ef224dee216468e736bbfc8457b6d7542c385548fcb0666c2ff7fa52d43b1156,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718995801444255575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annota
tions:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adf7b4a3e9492eae203fe2ae963d6b1b131c8c6c809259fcf8ee94872bdf0bea,PodSandboxId:3d95d41781333e360e7471bd45a44f887d5365c40348dafee3d31ac6130d068b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718995801432250413,Labels:map[string]string{io
.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5
b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718994540131805136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718994459852946952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718994458069993945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718994457887549344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0
d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718994438148586283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a
899,State:CONTAINER_EXITED,CreatedAt:1718994438095721977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:17189
94438069880812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:171899443800395583
8,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b75c66c0-204b-43b1-8b5f-d10f7e16cb9c name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.700321818Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=17a1bd6d-df08-4389-96fc-c7664143d00d name=/runtime.v1.RuntimeService/Version
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.700451025Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=17a1bd6d-df08-4389-96fc-c7664143d00d name=/runtime.v1.RuntimeService/Version
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.701914552Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=90a51476-3193-4d5d-bff2-7c99c4805433 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.702946300Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718996058702911686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=90a51476-3193-4d5d-bff2-7c99c4805433 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.704427546Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ceccdaff-2b8f-4aef-9336-3b02da5d6a7b name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.704533714Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ceccdaff-2b8f-4aef-9336-3b02da5d6a7b name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:18 ha-406291 crio[4830]: time="2024-06-21 18:54:18.705092084Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:09a2e3d098856f2200e39c92669f6f175a32d42297a9a3d5c291978d1f8d0d74,PodSandboxId:231b7531a974b4fa1168f271b37ea5cf33df2e5ab59ea67d46149f9a8197404b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718995840721463906,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb10cac6d1c3e97a71930fb9a7f4b79dce5391ffc03f1ea516374c17821d716,PodSandboxId:908bde46281af414c0075aabce7890dfa087f381a3ef9a5b0651ab520cdb8435,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718995822483073221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"con
tainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c869f01d25b200b4c3df8e084f4eff83bea86cbd7c409e04f0a85157042dec2c,PodSandboxId:e10e95f5f35c01c0eb2ad3a0a49910bd49cf827b26c09a78b7dd3d2faa15fe55,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718995822456885612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c
679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35ca450b8450a611e9ad835bbf3d408c728e7e7d1fbf258c8f249d80bcf038f,PodSandboxId:8fec4c6e62141364888e488aa814c1f06b60e58be5c4bb875b6e1eb5ffc4a250,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718995821779424178,Labels:map[st
ring]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 369c576788ec675acc0ff507ad4caf20,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:246b5b36ac09f427c065ee257a5df705d3a4d6bb3c0bce5b8322f7d64496dc52,PodSandboxId:047b75f8fe402d3c3c7fcc65fc18c56ffec45e20f3f1a452338a41433d34e078,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718995807698855971,Labels:map[string]string{io.kubernetes.container.n
ame: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41ffe84b8dea76129f1fa5d5726f6cf43e1409a959998ebe3a3fc56d8699d7f,PodSandboxId:4a9342a5a2eeb43140514126f52d0c9fd38f727529c857e0891c8bf2d31c4a8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718995807806583037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.p
od.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dcbcf864ab99955feff994f6bcd539edc4380e9bffd7cd534dd967c7bad498,PodSandboxId:535a7ff15105f569395c6cf7f02fefc79c194a97e051fa5af9412f15bd20af54,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718995807504571464,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce53eeeec0f21c6681925b7c5e72b8595ab65de8b0d0b768da43f7f434af72d,PodSandboxId:bca8e9a757e1c46d1ca2cedba74336bb99f1b505f861e6ca80ae9d5053f4ed3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718995807469500725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59d0df4fcf162ec60f5d928ad001ff6a374887d38c9f6791aab5c706f47c791,PodSandboxId:4e2453ce7944062b3c2f93ec84b80a2b6493725c3f52899047ed966b2d36fd6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718995807408632939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c120a578b20e1b617a5b93202c07c27c30de5bfc4580b4c826235b3afc8204,PodSandboxId:84fbafaf5a0bea8e4df39e98942eb41300c5281d1b6217f02587c6fa3fbd2b34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718995807315798233,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2e61853ab788fb7b5222dedf458d7085852d9caf32cf492e3bce968e130374,PodSandboxId:b77046a9f35081deae7f5de5700954014cb07d84dbad8bcca2e9ad955a3e015a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718995807128041977,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b0
97b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bba601718e9734309428daa119e2e5d6e129b3436277dc5011fa708f21b8de0,PodSandboxId:ef224dee216468e736bbfc8457b6d7542c385548fcb0666c2ff7fa52d43b1156,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718995801444255575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annota
tions:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adf7b4a3e9492eae203fe2ae963d6b1b131c8c6c809259fcf8ee94872bdf0bea,PodSandboxId:3d95d41781333e360e7471bd45a44f887d5365c40348dafee3d31ac6130d068b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718995801432250413,Labels:map[string]string{io
.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5
b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718994540131805136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718994459852946952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718994458069993945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718994457887549344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0
d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718994438148586283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a
899,State:CONTAINER_EXITED,CreatedAt:1718994438095721977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:17189
94438069880812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:171899443800395583
8,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ceccdaff-2b8f-4aef-9336-3b02da5d6a7b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	09a2e3d098856       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   231b7531a974b       busybox-fc5497c4f-qvl48
	3eb10cac6d1c3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   2                   908bde46281af       coredns-7db6d8ff4d-7ng4v
	c869f01d25b20       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   2                   e10e95f5f35c0       coredns-7db6d8ff4d-nx5xs
	e35ca450b8450       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      3 minutes ago       Running             kube-vip                  0                   8fec4c6e62141       kube-vip-ha-406291
	e41ffe84b8dea       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      4 minutes ago       Running             kindnet-cni               1                   4a9342a5a2eeb       kindnet-vnds7
	246b5b36ac09f       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      4 minutes ago       Running             kube-proxy                1                   047b75f8fe402       kube-proxy-xnbqj
	e8dcbcf864ab9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   535a7ff15105f       etcd-ha-406291
	6ce53eeeec0f2       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      4 minutes ago       Running             kube-apiserver            1                   bca8e9a757e1c       kube-apiserver-ha-406291
	d59d0df4fcf16       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   4e2453ce79440       storage-provisioner
	e9c120a578b20       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      4 minutes ago       Running             kube-controller-manager   1                   84fbafaf5a0be       kube-controller-manager-ha-406291
	6f2e61853ab78       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      4 minutes ago       Running             kube-scheduler            1                   b77046a9f3508       kube-scheduler-ha-406291
	6bba601718e97       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Exited              coredns                   1                   ef224dee21646       coredns-7db6d8ff4d-7ng4v
	adf7b4a3e9492       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Exited              coredns                   1                   3d95d41781333       coredns-7db6d8ff4d-nx5xs
	252cb2f279857       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   25 minutes ago      Exited              busybox                   0                   cd0fd4f6a3d6c       busybox-fc5497c4f-qvl48
	9d0ad73531279       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      26 minutes ago      Exited              storage-provisioner       0                   ab6a16146209c       storage-provisioner
	468b13f5a8054       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      26 minutes ago      Exited              kindnet-cni               0                   956df8749e8db       kindnet-vnds7
	e41f8891c5177       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      26 minutes ago      Exited              kube-proxy                0                   ab9fd8c2e0094       kube-proxy-xnbqj
	a143e6000662a       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      27 minutes ago      Exited              kube-scheduler            0                   7cae0fc993f3a       kube-scheduler-ha-406291
	89b399d67fa40       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      27 minutes ago      Exited              etcd                      0                   afce4542ea7ca       etcd-ha-406291
	2d71c6ae5cee5       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      27 minutes ago      Exited              kube-apiserver            0                   9552de7a0cb73       kube-apiserver-ha-406291
	3fbe446b39e8d       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      27 minutes ago      Exited              kube-controller-manager   0                   2b8837f8e36da       kube-controller-manager-ha-406291
	
	
	==> coredns [3eb10cac6d1c3e97a71930fb9a7f4b79dce5391ffc03f1ea516374c17821d716] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54228 - 26713 "HINFO IN 4548532589898165947.6437560420477737975. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010774765s
	
	
	==> coredns [6bba601718e9734309428daa119e2e5d6e129b3436277dc5011fa708f21b8de0] <==
	
	
	==> coredns [adf7b4a3e9492eae203fe2ae963d6b1b131c8c6c809259fcf8ee94872bdf0bea] <==
	
	
	==> coredns [c869f01d25b200b4c3df8e084f4eff83bea86cbd7c409e04f0a85157042dec2c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:32776 - 50363 "HINFO IN 2533289171171185985.5104556903785863448. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020452492s
	
	
	==> describe nodes <==
	Name:               ha-406291
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=ha-406291
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_21T18_27_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 18:27:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406291
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 18:54:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 18:50:22 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 18:50:22 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 18:50:22 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 18:50:22 +0000   Fri, 21 Jun 2024 18:27:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    ha-406291
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 10b5f2f4e64d426eb3a71e7a23c0cea5
	  System UUID:                10b5f2f4-e64d-426e-b3a7-1e7a23c0cea5
	  Boot ID:                    10778ad9-ed13-4749-a084-25b2b2bfde76
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qvl48              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 coredns-7db6d8ff4d-7ng4v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-7db6d8ff4d-nx5xs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-406291                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         26m
	  kube-system                 kindnet-vnds7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-406291             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-controller-manager-ha-406291    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-xnbqj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-406291             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-vip-ha-406291                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 26m    kube-proxy       
	  Normal   Starting                 3m56s  kube-proxy       
	  Normal   Starting                 26m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  26m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  26m    kubelet          Node ha-406291 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    26m    kubelet          Node ha-406291 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     26m    kubelet          Node ha-406291 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           26m    node-controller  Node ha-406291 event: Registered Node ha-406291 in Controller
	  Normal   NodeReady                26m    kubelet          Node ha-406291 status is now: NodeReady
	  Warning  ContainerGCFailed        4m55s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           3m55s  node-controller  Node ha-406291 event: Registered Node ha-406291 in Controller
	
	
	==> dmesg <==
	[  +4.855560] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun21 18:27] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.057394] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056681] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.167604] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.147792] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.253886] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +3.905184] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.549385] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.060073] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.066237] systemd-fstab-generator[1360]: Ignoring "noauto" option for root device
	[  +0.078680] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.552032] kauditd_printk_skb: 21 callbacks suppressed
	[Jun21 18:28] kauditd_printk_skb: 74 callbacks suppressed
	[Jun21 18:50] systemd-fstab-generator[4547]: Ignoring "noauto" option for root device
	[  +0.147300] systemd-fstab-generator[4559]: Ignoring "noauto" option for root device
	[  +0.179225] systemd-fstab-generator[4573]: Ignoring "noauto" option for root device
	[  +0.153967] systemd-fstab-generator[4585]: Ignoring "noauto" option for root device
	[  +0.498288] systemd-fstab-generator[4740]: Ignoring "noauto" option for root device
	[  +0.987159] systemd-fstab-generator[4965]: Ignoring "noauto" option for root device
	[  +4.443961] kauditd_printk_skb: 142 callbacks suppressed
	[ +14.867731] kauditd_printk_skb: 86 callbacks suppressed
	[  +7.940594] kauditd_printk_skb: 16 callbacks suppressed
	
	
	==> etcd [89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8] <==
	{"level":"info","ts":"2024-06-21T18:27:37.357719Z","caller":"traceutil/trace.go:171","msg":"trace[571743030] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"105.067279ms","start":"2024-06-21T18:27:37.252598Z","end":"2024-06-21T18:27:37.357665Z","steps":["trace[571743030] 'process raft request'  (duration: 48.775466ms)","trace[571743030] 'compare'  (duration: 56.093787ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T18:28:12.689426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.176174ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11593268453381319053 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:496 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:369 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-21T18:28:12.689586Z","caller":"traceutil/trace.go:171","msg":"trace[939483523] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"172.541349ms","start":"2024-06-21T18:28:12.517021Z","end":"2024-06-21T18:28:12.689563Z","steps":["trace[939483523] 'process raft request'  (duration: 46.605278ms)","trace[939483523] 'compare'  (duration: 124.988397ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-21T18:37:19.55118Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2024-06-21T18:37:19.562898Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":969,"took":"11.353931ms","hash":518064132,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2441216,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-06-21T18:37:19.562955Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":518064132,"revision":969,"compact-revision":-1}
	{"level":"info","ts":"2024-06-21T18:41:01.46327Z","caller":"traceutil/trace.go:171","msg":"trace[373022302] transaction","detail":"{read_only:false; response_revision:1916; number_of_response:1; }","duration":"202.232692ms","start":"2024-06-21T18:41:01.260997Z","end":"2024-06-21T18:41:01.46323Z","steps":["trace[373022302] 'process raft request'  (duration: 201.291371ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-21T18:41:01.463374Z","caller":"traceutil/trace.go:171","msg":"trace[1787973675] transaction","detail":"{read_only:false; response_revision:1917; number_of_response:1; }","duration":"177.381269ms","start":"2024-06-21T18:41:01.285981Z","end":"2024-06-21T18:41:01.463362Z","steps":["trace[1787973675] 'process raft request'  (duration: 177.120594ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-21T18:42:19.558621Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1509}
	{"level":"info","ts":"2024-06-21T18:42:19.563203Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1509,"took":"4.232264ms","hash":4134822789,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2011136,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-06-21T18:42:19.563247Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4134822789,"revision":1509,"compact-revision":969}
	{"level":"info","ts":"2024-06-21T18:47:19.567745Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2121}
	{"level":"info","ts":"2024-06-21T18:47:19.578898Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2121,"took":"9.848541ms","hash":4103272021,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2158592,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-06-21T18:47:19.579002Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4103272021,"revision":2121,"compact-revision":1509}
	{"level":"info","ts":"2024-06-21T18:48:28.996649Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-06-21T18:48:28.997685Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"ha-406291","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.198:2380"],"advertise-client-urls":["https://192.168.39.198:2379"]}
	{"level":"warn","ts":"2024-06-21T18:48:28.997914Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/06/21 18:48:28 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-21T18:48:29.019664Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-21T18:48:29.07084Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.198:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-21T18:48:29.070996Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.198:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-21T18:48:29.071071Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f1d2ab5330a2a0e3","current-leader-member-id":"f1d2ab5330a2a0e3"}
	{"level":"info","ts":"2024-06-21T18:48:29.073709Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.198:2380"}
	{"level":"info","ts":"2024-06-21T18:48:29.073927Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.198:2380"}
	{"level":"info","ts":"2024-06-21T18:48:29.073993Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-406291","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.198:2380"],"advertise-client-urls":["https://192.168.39.198:2379"]}
	
	
	==> etcd [e8dcbcf864ab99955feff994f6bcd539edc4380e9bffd7cd534dd967c7bad498] <==
	{"level":"info","ts":"2024-06-21T18:50:08.468075Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-21T18:50:08.468105Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-21T18:50:08.501093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 switched to configuration voters=(17425178282036469987)"}
	{"level":"info","ts":"2024-06-21T18:50:08.50936Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9fb372ad12afeb1b","local-member-id":"f1d2ab5330a2a0e3","added-peer-id":"f1d2ab5330a2a0e3","added-peer-peer-urls":["https://192.168.39.198:2380"]}
	{"level":"info","ts":"2024-06-21T18:50:08.509531Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9fb372ad12afeb1b","local-member-id":"f1d2ab5330a2a0e3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:50:08.509572Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:50:08.501761Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-21T18:50:08.529317Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f1d2ab5330a2a0e3","initial-advertise-peer-urls":["https://192.168.39.198:2380"],"listen-peer-urls":["https://192.168.39.198:2380"],"advertise-client-urls":["https://192.168.39.198:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.198:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-21T18:50:08.529422Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-21T18:50:08.501793Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.198:2380"}
	{"level":"info","ts":"2024-06-21T18:50:08.529674Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.198:2380"}
	{"level":"info","ts":"2024-06-21T18:50:10.027082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-21T18:50:10.02726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-21T18:50:10.027346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgPreVoteResp from f1d2ab5330a2a0e3 at term 2"}
	{"level":"info","ts":"2024-06-21T18:50:10.027392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became candidate at term 3"}
	{"level":"info","ts":"2024-06-21T18:50:10.027417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgVoteResp from f1d2ab5330a2a0e3 at term 3"}
	{"level":"info","ts":"2024-06-21T18:50:10.027444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became leader at term 3"}
	{"level":"info","ts":"2024-06-21T18:50:10.027474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f1d2ab5330a2a0e3 elected leader f1d2ab5330a2a0e3 at term 3"}
	{"level":"info","ts":"2024-06-21T18:50:10.029196Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f1d2ab5330a2a0e3","local-member-attributes":"{Name:ha-406291 ClientURLs:[https://192.168.39.198:2379]}","request-path":"/0/members/f1d2ab5330a2a0e3/attributes","cluster-id":"9fb372ad12afeb1b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-21T18:50:10.029242Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:50:10.02933Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:50:10.02982Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-21T18:50:10.029851Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-21T18:50:10.031528Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-21T18:50:10.031596Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.198:2379"}
	
	
	==> kernel <==
	 18:54:19 up 27 min,  0 users,  load average: 0.25, 0.34, 0.25
	Linux ha-406291 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d] <==
	I0621 18:47:19.889708       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:47:29.896242       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:47:29.896507       1 main.go:227] handling current node
	I0621 18:47:29.896581       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:47:29.896607       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:47:39.900437       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:47:39.900471       1 main.go:227] handling current node
	I0621 18:47:39.900481       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:47:39.900486       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:47:49.910179       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:47:49.910364       1 main.go:227] handling current node
	I0621 18:47:49.910412       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:47:49.910433       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:47:59.920904       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:47:59.921055       1 main.go:227] handling current node
	I0621 18:47:59.921083       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:47:59.921104       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:48:09.925491       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:48:09.925574       1 main.go:227] handling current node
	I0621 18:48:09.925596       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:48:09.925612       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:48:19.931901       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:48:19.931924       1 main.go:227] handling current node
	I0621 18:48:19.931934       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:48:19.931948       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [e41ffe84b8dea76129f1fa5d5726f6cf43e1409a959998ebe3a3fc56d8699d7f] <==
	I0621 18:53:11.677083       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:53:21.681340       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:53:21.681491       1 main.go:227] handling current node
	I0621 18:53:21.681517       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:53:21.681535       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:53:31.688278       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:53:31.688318       1 main.go:227] handling current node
	I0621 18:53:31.688332       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:53:31.688338       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:53:41.701842       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:53:41.701885       1 main.go:227] handling current node
	I0621 18:53:41.701909       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:53:41.701915       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:53:51.716954       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:53:51.717674       1 main.go:227] handling current node
	I0621 18:53:51.717721       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:53:51.717779       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:54:01.725293       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:54:01.725480       1 main.go:227] handling current node
	I0621 18:54:01.725509       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:54:01.725528       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:54:11.731578       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:54:11.731619       1 main.go:227] handling current node
	I0621 18:54:11.731630       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:54:11.731635       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3] <==
	I0621 18:48:29.003941       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0621 18:48:29.003974       1 establishing_controller.go:87] Shutting down EstablishingController
	I0621 18:48:29.004016       1 naming_controller.go:302] Shutting down NamingConditionController
	I0621 18:48:29.004054       1 controller.go:117] Shutting down OpenAPI V3 controller
	I0621 18:48:29.004093       1 controller.go:167] Shutting down OpenAPI controller
	I0621 18:48:29.004170       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0621 18:48:29.004222       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0621 18:48:29.004270       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0621 18:48:29.004356       1 controller.go:129] Ending legacy_token_tracking_controller
	I0621 18:48:29.004425       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0621 18:48:29.004499       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0621 18:48:29.004582       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0621 18:48:29.004661       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0621 18:48:29.005398       1 available_controller.go:439] Shutting down AvailableConditionController
	I0621 18:48:29.005443       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0621 18:48:29.009516       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0621 18:48:29.014355       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0621 18:48:29.017571       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0621 18:48:29.018587       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0621 18:48:29.018611       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0621 18:48:29.018651       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0621 18:48:29.018710       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0621 18:48:29.018731       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0621 18:48:29.022079       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	W0621 18:48:29.024248       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [6ce53eeeec0f21c6681925b7c5e72b8595ab65de8b0d0b768da43f7f434af72d] <==
	I0621 18:50:11.388689       1 controller.go:87] Starting OpenAPI V3 controller
	I0621 18:50:11.388786       1 naming_controller.go:291] Starting NamingConditionController
	I0621 18:50:11.388849       1 establishing_controller.go:76] Starting EstablishingController
	I0621 18:50:11.388914       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0621 18:50:11.388976       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0621 18:50:11.389024       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0621 18:50:11.459446       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0621 18:50:11.461317       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0621 18:50:11.461355       1 policy_source.go:224] refreshing policies
	I0621 18:50:11.462236       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0621 18:50:11.462495       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0621 18:50:11.462570       1 shared_informer.go:320] Caches are synced for configmaps
	I0621 18:50:11.462620       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0621 18:50:11.462560       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0621 18:50:11.463762       1 aggregator.go:165] initial CRD sync complete...
	I0621 18:50:11.463819       1 autoregister_controller.go:141] Starting autoregister controller
	I0621 18:50:11.463843       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0621 18:50:11.463901       1 cache.go:39] Caches are synced for autoregister controller
	I0621 18:50:11.464074       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0621 18:50:11.465293       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0621 18:50:11.469748       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0621 18:50:11.553642       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0621 18:50:12.365967       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0621 18:50:24.661126       1 controller.go:615] quota admission added evaluator for: endpoints
	I0621 18:50:24.756657       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31] <==
	I0621 18:27:39.330983       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.725µs"
	I0621 18:27:39.352409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.246µs"
	I0621 18:27:39.366116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.163µs"
	I0621 18:27:40.575618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="65.679µs"
	I0621 18:27:40.612176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.937752ms"
	I0621 18:27:40.612598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.232µs"
	I0621 18:27:40.634931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.444693ms"
	I0621 18:27:40.635035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.847µs"
	I0621 18:27:41.885215       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0621 18:28:57.137627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.563277ms"
	I0621 18:28:57.164070       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.375749ms"
	I0621 18:28:57.164194       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.743µs"
	I0621 18:29:00.876863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.452577ms"
	I0621 18:29:00.877083       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.932µs"
	I0621 18:41:01.468373       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-406291-m03\" does not exist"
	I0621 18:41:01.505245       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-406291-m03" podCIDRs=["10.244.1.0/24"]
	I0621 18:41:02.015312       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-406291-m03"
	I0621 18:41:10.879504       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-406291-m03"
	I0621 18:41:10.905675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="137.95µs"
	I0621 18:41:10.905996       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.91µs"
	I0621 18:41:10.921286       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.939µs"
	I0621 18:41:14.431187       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.902838ms"
	I0621 18:41:14.431268       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.911µs"
	I0621 18:47:02.153491       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.967868ms"
	I0621 18:47:02.153669       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.935µs"
	
	
	==> kube-controller-manager [e9c120a578b20e1b617a5b93202c07c27c30de5bfc4580b4c826235b3afc8204] <==
	I0621 18:50:24.553388       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0621 18:50:24.554288       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0621 18:50:24.556627       1 shared_informer.go:320] Caches are synced for stateful set
	I0621 18:50:24.558593       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0621 18:50:24.567415       1 shared_informer.go:320] Caches are synced for attach detach
	I0621 18:50:24.567453       1 shared_informer.go:320] Caches are synced for resource quota
	I0621 18:50:24.586989       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.232569ms"
	I0621 18:50:24.587087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="68.533µs"
	I0621 18:50:24.602738       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0621 18:50:24.603613       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0621 18:50:24.603724       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0621 18:50:24.603738       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0621 18:50:24.652886       1 shared_informer.go:320] Caches are synced for persistent volume
	I0621 18:50:24.653029       1 shared_informer.go:320] Caches are synced for PV protection
	I0621 18:50:25.040469       1 shared_informer.go:320] Caches are synced for garbage collector
	I0621 18:50:25.040558       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0621 18:50:25.050749       1 shared_informer.go:320] Caches are synced for garbage collector
	I0621 18:50:29.659533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="11.629839ms"
	I0621 18:50:29.659680       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.553µs"
	I0621 18:50:45.265661       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.751µs"
	I0621 18:54:11.005312       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.652113ms"
	I0621 18:54:11.005429       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.943µs"
	I0621 18:54:11.019923       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.991224ms"
	I0621 18:54:11.020008       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.822µs"
	I0621 18:54:11.020186       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.287µs"
	
	
	==> kube-proxy [246b5b36ac09f427c065ee257a5df705d3a4d6bb3c0bce5b8322f7d64496dc52] <==
	I0621 18:50:09.288398       1 server_linux.go:69] "Using iptables proxy"
	E0621 18:50:12.442279       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-406291\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0621 18:50:15.512951       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-406291\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0621 18:50:18.585517       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-406291\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0621 18:50:22.984302       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.198"]
	I0621 18:50:23.021021       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0621 18:50:23.021181       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0621 18:50:23.021227       1 server_linux.go:165] "Using iptables Proxier"
	I0621 18:50:23.023762       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0621 18:50:23.024088       1 server.go:872] "Version info" version="v1.30.2"
	I0621 18:50:23.024245       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 18:50:23.025824       1 config.go:192] "Starting service config controller"
	I0621 18:50:23.025902       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0621 18:50:23.025971       1 config.go:101] "Starting endpoint slice config controller"
	I0621 18:50:23.025989       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0621 18:50:23.026706       1 config.go:319] "Starting node config controller"
	I0621 18:50:23.026831       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0621 18:50:23.127003       1 shared_informer.go:320] Caches are synced for node config
	I0621 18:50:23.127050       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0621 18:50:23.127115       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547] <==
	I0621 18:27:38.126736       1 server_linux.go:69] "Using iptables proxy"
	I0621 18:27:38.143236       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.198"]
	I0621 18:27:38.177576       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0621 18:27:38.177626       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0621 18:27:38.177644       1 server_linux.go:165] "Using iptables Proxier"
	I0621 18:27:38.180797       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0621 18:27:38.181002       1 server.go:872] "Version info" version="v1.30.2"
	I0621 18:27:38.181026       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 18:27:38.182882       1 config.go:192] "Starting service config controller"
	I0621 18:27:38.183195       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0621 18:27:38.183262       1 config.go:101] "Starting endpoint slice config controller"
	I0621 18:27:38.183278       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0621 18:27:38.184787       1 config.go:319] "Starting node config controller"
	I0621 18:27:38.184819       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0621 18:27:38.283818       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0621 18:27:38.283839       1 shared_informer.go:320] Caches are synced for service config
	I0621 18:27:38.285303       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6f2e61853ab788fb7b5222dedf458d7085852d9caf32cf492e3bce968e130374] <==
	I0621 18:50:08.290679       1 serving.go:380] Generated self-signed cert in-memory
	W0621 18:50:11.414815       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0621 18:50:11.414966       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0621 18:50:11.415056       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0621 18:50:11.415082       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0621 18:50:11.447211       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0621 18:50:11.448436       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 18:50:11.456933       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0621 18:50:11.457032       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0621 18:50:11.457077       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0621 18:50:11.460859       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0621 18:50:11.557723       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d] <==
	E0621 18:27:21.176992       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:21.177025       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177056       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0621 18:27:21.177088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0621 18:27:21.177120       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0621 18:27:21.177197       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:21.177204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0621 18:27:22.041765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.041824       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.144830       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:22.144881       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0621 18:27:22.217224       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.217266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.256407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:22.256450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0621 18:27:22.361486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0621 18:27:22.361536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0621 18:27:22.366073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0621 18:27:22.366190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0621 18:27:25.267361       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0621 18:48:28.987861       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0621 18:48:28.987988       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0621 18:48:28.988601       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 21 18:50:22 ha-406291 kubelet[1367]: I0621 18:50:22.421287    1367 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-406291"
	Jun 21 18:50:22 ha-406291 kubelet[1367]: I0621 18:50:22.432917    1367 scope.go:117] "RemoveContainer" containerID="adf7b4a3e9492eae203fe2ae963d6b1b131c8c6c809259fcf8ee94872bdf0bea"
	Jun 21 18:50:22 ha-406291 kubelet[1367]: I0621 18:50:22.434123    1367 scope.go:117] "RemoveContainer" containerID="6bba601718e9734309428daa119e2e5d6e129b3436277dc5011fa708f21b8de0"
	Jun 21 18:50:24 ha-406291 kubelet[1367]: E0621 18:50:24.491904    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:50:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:50:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:50:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:50:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:51:24 ha-406291 kubelet[1367]: E0621 18:51:24.484207    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:51:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:51:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:51:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:51:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:51:26 ha-406291 kubelet[1367]: I0621 18:51:26.432644    1367 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-406291" podUID="48932727-9ffb-476e-8b2a-ee40959393c5"
	Jun 21 18:51:49 ha-406291 kubelet[1367]: I0621 18:51:49.719495    1367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-qvl48" podStartSLOduration=1370.151479628 podStartE2EDuration="22m52.719456002s" podCreationTimestamp="2024-06-21 18:28:57 +0000 UTC" firstStartedPulling="2024-06-21 18:28:57.551504492 +0000 UTC m=+93.252502721" lastFinishedPulling="2024-06-21 18:29:00.119480863 +0000 UTC m=+95.820479095" observedRunningTime="2024-06-21 18:29:00.862800003 +0000 UTC m=+96.563798241" watchObservedRunningTime="2024-06-21 18:51:49.719456002 +0000 UTC m=+1465.420454249"
	Jun 21 18:52:24 ha-406291 kubelet[1367]: E0621 18:52:24.483755    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:52:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:52:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:52:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:52:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:53:24 ha-406291 kubelet[1367]: E0621 18:53:24.483552    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:53:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:53:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:53:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:53:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0621 18:54:18.312402   39185 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19112-8111/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-406291 -n ha-406291
helpers_test.go:261: (dbg) Run:  kubectl --context ha-406291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-mt8z9 busybox-fc5497c4f-p2c87
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-406291 describe pod busybox-fc5497c4f-mt8z9 busybox-fc5497c4f-p2c87
helpers_test.go:282: (dbg) kubectl --context ha-406291 describe pod busybox-fc5497c4f-mt8z9 busybox-fc5497c4f-p2c87:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-mt8z9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cr6l7 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-cr6l7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  9s    default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	
	
	Name:             busybox-fc5497c4f-p2c87
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q8tzk (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-q8tzk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  4m8s                 default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  3m57s                default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  14m (x3 over 25m)    default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  7m56s (x3 over 13m)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (9.61s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:413: expected profile "ha-406291" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-406291\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-406291\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-406291\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.198\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.89\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisi
oner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\"
,\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-406291 -n ha-406291
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-406291 logs -n 25: (1.495588796s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:39 UTC | 21 Jun 24 18:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- get pods -o          | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-drm4v              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC |                     |
	|         | busybox-fc5497c4f-p2c87              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-406291 -- exec                 | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:40 UTC |
	|         | busybox-fc5497c4f-qvl48 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |           |         |         |                     |                     |
	| node    | add -p ha-406291 -v=7                | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:40 UTC | 21 Jun 24 18:41 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-406291 node stop m02 -v=7         | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:41 UTC | 21 Jun 24 18:41 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-406291 node start m02 -v=7        | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:41 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-406291 -v=7               | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:46 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-406291 -v=7                    | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:46 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-406291 --wait=true -v=7        | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:48 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-406291                    | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:54 UTC |                     |
	| node    | ha-406291 node delete m03 -v=7       | ha-406291 | jenkins | v1.33.1 | 21 Jun 24 18:54 UTC | 21 Jun 24 18:54 UTC |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/21 18:48:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0621 18:48:27.831476   37614 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:48:27.831947   37614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:48:27.831958   37614 out.go:304] Setting ErrFile to fd 2...
	I0621 18:48:27.831963   37614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:48:27.832237   37614 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:48:27.832938   37614 out.go:298] Setting JSON to false
	I0621 18:48:27.833836   37614 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5406,"bootTime":1718990302,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 18:48:27.833898   37614 start.go:139] virtualization: kvm guest
	I0621 18:48:27.836380   37614 out.go:177] * [ha-406291] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 18:48:27.837785   37614 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 18:48:27.837821   37614 notify.go:220] Checking for updates...
	I0621 18:48:27.840567   37614 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 18:48:27.841953   37614 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:48:27.843187   37614 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:48:27.844558   37614 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 18:48:27.845907   37614 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 18:48:27.847613   37614 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:48:27.847732   37614 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 18:48:27.848413   37614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:48:27.848482   37614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:48:27.863080   37614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46699
	I0621 18:48:27.863473   37614 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:48:27.864007   37614 main.go:141] libmachine: Using API Version  1
	I0621 18:48:27.864033   37614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:48:27.864411   37614 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:48:27.864641   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:48:27.900101   37614 out.go:177] * Using the kvm2 driver based on existing profile
	I0621 18:48:27.901277   37614 start.go:297] selected driver: kvm2
	I0621 18:48:27.901299   37614 start.go:901] validating driver "kvm2" against &{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.193 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:48:27.901441   37614 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 18:48:27.901750   37614 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:48:27.901843   37614 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 18:48:27.916614   37614 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 18:48:27.917318   37614 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0621 18:48:27.917379   37614 cni.go:84] Creating CNI manager for ""
	I0621 18:48:27.917391   37614 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0621 18:48:27.917453   37614 start.go:340] cluster config:
	{Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.193 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:48:27.917576   37614 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 18:48:27.919430   37614 out.go:177] * Starting "ha-406291" primary control-plane node in "ha-406291" cluster
	I0621 18:48:27.920610   37614 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:48:27.920649   37614 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 18:48:27.920659   37614 cache.go:56] Caching tarball of preloaded images
	I0621 18:48:27.920773   37614 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 18:48:27.920787   37614 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 18:48:27.920894   37614 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:48:27.921114   37614 start.go:360] acquireMachinesLock for ha-406291: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 18:48:27.921161   37614 start.go:364] duration metric: took 28.141µs to acquireMachinesLock for "ha-406291"
	I0621 18:48:27.921180   37614 start.go:96] Skipping create...Using existing machine configuration
	I0621 18:48:27.921190   37614 fix.go:54] fixHost starting: 
	I0621 18:48:27.921463   37614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:48:27.921500   37614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:48:27.936449   37614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33963
	I0621 18:48:27.936960   37614 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:48:27.937520   37614 main.go:141] libmachine: Using API Version  1
	I0621 18:48:27.937546   37614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:48:27.937916   37614 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:48:27.938097   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:48:27.938231   37614 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:48:27.939757   37614 fix.go:112] recreateIfNeeded on ha-406291: state=Running err=<nil>
	W0621 18:48:27.939772   37614 fix.go:138] unexpected machine state, will restart: <nil>
	I0621 18:48:27.941724   37614 out.go:177] * Updating the running kvm2 "ha-406291" VM ...
	I0621 18:48:27.942997   37614 machine.go:94] provisionDockerMachine start ...
	I0621 18:48:27.943024   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:48:27.943206   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:48:27.945749   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:27.946257   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:27.946287   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:27.946456   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:48:27.946613   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:27.946788   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:27.946925   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:48:27.947091   37614 main.go:141] libmachine: Using SSH client type: native
	I0621 18:48:27.947292   37614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:48:27.947307   37614 main.go:141] libmachine: About to run SSH command:
	hostname
	I0621 18:48:28.051086   37614 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291
	
	I0621 18:48:28.051116   37614 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:48:28.051394   37614 buildroot.go:166] provisioning hostname "ha-406291"
	I0621 18:48:28.051420   37614 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:48:28.051618   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:48:28.054638   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.055076   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:28.055099   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.055296   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:48:28.055524   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.055672   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.055901   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:48:28.056090   37614 main.go:141] libmachine: Using SSH client type: native
	I0621 18:48:28.056290   37614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:48:28.056305   37614 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406291 && echo "ha-406291" | sudo tee /etc/hostname
	I0621 18:48:28.169279   37614 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406291
	
	I0621 18:48:28.169305   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:48:28.171914   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.172264   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:28.172307   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.172459   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:48:28.172637   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.172764   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.172937   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:48:28.173112   37614 main.go:141] libmachine: Using SSH client type: native
	I0621 18:48:28.173334   37614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:48:28.173358   37614 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406291/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 18:48:28.270684   37614 main.go:141] libmachine: SSH cmd err, output: <nil>: 
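The SSH snippet above makes sure the guest's /etc/hosts maps 127.0.1.1 to the machine name: it rewrites an existing 127.0.1.1 entry if one is present, otherwise it appends a new line. A minimal Go sketch of the same check-and-append logic (paths and hostname are placeholders taken from this log, not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostsEntry makes sure the hosts file maps 127.0.1.1 to hostname,
// mirroring the shell snippet above: rewrite an existing 127.0.1.1 line
// if present, otherwise append a new one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Already mapped to this hostname on some line? Nothing to do.
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(hostname)+`$`).Match(data) {
		return nil
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.Match(data) {
		data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
	} else {
		data = append(data, []byte(fmt.Sprintf("127.0.1.1 %s\n", hostname))...)
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "ha-406291"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}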
	I0621 18:48:28.270733   37614 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 18:48:28.270776   37614 buildroot.go:174] setting up certificates
	I0621 18:48:28.270798   37614 provision.go:84] configureAuth start
	I0621 18:48:28.270816   37614 main.go:141] libmachine: (ha-406291) Calling .GetMachineName
	I0621 18:48:28.271110   37614 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:48:28.274048   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.274413   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:28.274440   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.274625   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:48:28.276911   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.277237   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:28.277273   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.277425   37614 provision.go:143] copyHostCerts
	I0621 18:48:28.277474   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:48:28.277514   37614 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 18:48:28.277525   37614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 18:48:28.277586   37614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 18:48:28.277681   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:48:28.277699   37614 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 18:48:28.277706   37614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 18:48:28.277732   37614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 18:48:28.277852   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:48:28.277874   37614 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 18:48:28.277881   37614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 18:48:28.277908   37614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 18:48:28.277967   37614 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.ha-406291 san=[127.0.0.1 192.168.39.198 ha-406291 localhost minikube]
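The server certificate generated here carries SANs for the node IP, the HA VIP and the usual localhost names. A self-contained Go sketch of issuing such a SAN-bearing certificate from a CA with crypto/x509 (key sizes, lifetimes and file handling simplified; this is illustrative, not minikube's code path):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key pair and self-signed CA certificate (stand-in for minikubeCA).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate with SANs like those listed in the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"jenkins.ha-406291"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-406291", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.198")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}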
	I0621 18:48:28.770044   37614 provision.go:177] copyRemoteCerts
	I0621 18:48:28.770118   37614 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 18:48:28.770140   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:48:28.772531   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.772859   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:28.772888   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.773061   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:48:28.773274   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.773406   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:48:28.773544   37614 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:48:28.851817   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 18:48:28.851907   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 18:48:28.875949   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 18:48:28.876034   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0621 18:48:28.899404   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 18:48:28.899479   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0621 18:48:28.922832   37614 provision.go:87] duration metric: took 652.015125ms to configureAuth
	I0621 18:48:28.922865   37614 buildroot.go:189] setting minikube options for container-runtime
	I0621 18:48:28.923083   37614 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:48:28.923147   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:48:28.925724   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.926104   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:48:28.926143   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:48:28.926302   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:48:28.926538   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.926671   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:48:28.926850   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:48:28.926962   37614 main.go:141] libmachine: Using SSH client type: native
	I0621 18:48:28.927117   37614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:48:28.927134   37614 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 18:49:59.775008   37614 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 18:49:59.775041   37614 machine.go:97] duration metric: took 1m31.832022982s to provisionDockerMachine
	I0621 18:49:59.775056   37614 start.go:293] postStartSetup for "ha-406291" (driver="kvm2")
	I0621 18:49:59.775071   37614 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 18:49:59.775090   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:49:59.775469   37614 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 18:49:59.775508   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:49:59.778762   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:49:59.779252   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:49:59.779278   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:49:59.779425   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:49:59.779621   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:49:59.779730   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:49:59.779846   37614 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:49:59.861058   37614 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 18:49:59.865212   37614 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 18:49:59.865238   37614 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 18:49:59.865306   37614 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 18:49:59.865412   37614 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 18:49:59.865426   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 18:49:59.865530   37614 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 18:49:59.874847   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:49:59.898766   37614 start.go:296] duration metric: took 123.693827ms for postStartSetup
	I0621 18:49:59.898814   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:49:59.899163   37614 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0621 18:49:59.899191   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:49:59.902342   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:49:59.902758   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:49:59.902781   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:49:59.902968   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:49:59.903148   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:49:59.903308   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:49:59.903440   37614 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	W0621 18:49:59.980000   37614 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0621 18:49:59.980025   37614 fix.go:56] duration metric: took 1m32.058837235s for fixHost
	I0621 18:49:59.980045   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:49:59.983376   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:49:59.983859   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:49:59.983891   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:49:59.984114   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:49:59.984357   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:49:59.984534   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:49:59.984719   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:49:59.984900   37614 main.go:141] libmachine: Using SSH client type: native
	I0621 18:49:59.985122   37614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0621 18:49:59.985139   37614 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0621 18:50:00.091107   37614 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718995800.019349431
	
	I0621 18:50:00.091140   37614 fix.go:216] guest clock: 1718995800.019349431
	I0621 18:50:00.091157   37614 fix.go:229] Guest: 2024-06-21 18:50:00.019349431 +0000 UTC Remote: 2024-06-21 18:49:59.98003189 +0000 UTC m=+92.182726233 (delta=39.317541ms)
	I0621 18:50:00.091202   37614 fix.go:200] guest clock delta is within tolerance: 39.317541ms
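The guest clock is compared against the host clock and is left alone because the 39.317541ms delta is within tolerance. A rough Go sketch of that comparison, using the values from the log line above (the one-second tolerance here is an assumed example, not necessarily the value minikube uses):

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports the absolute guest/host clock delta and
// whether it is small enough that no resync is needed.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Guest timestamp taken from the log: 1718995800.019349431.
	guest := time.Unix(1718995800, 19349431)
	host := guest.Add(-39317541 * time.Nanosecond) // 39.317541ms behind, as logged
	delta, ok := clockDeltaWithinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}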
	I0621 18:50:00.091209   37614 start.go:83] releasing machines lock for "ha-406291", held for 1m32.170035409s
	I0621 18:50:00.091239   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:50:00.091570   37614 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:50:00.094257   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:00.094684   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:50:00.094714   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:00.094867   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:50:00.095587   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:50:00.095720   37614 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:50:00.095777   37614 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 18:50:00.095826   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:50:00.095948   37614 ssh_runner.go:195] Run: cat /version.json
	I0621 18:50:00.095969   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:50:00.099018   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:00.099048   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:00.099355   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:50:00.099392   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:00.099417   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:50:00.099546   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:50:00.099547   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:00.099784   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:50:00.099802   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:50:00.099953   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:50:00.099953   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:50:00.100151   37614 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:50:00.100166   37614 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:50:00.100406   37614 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:50:00.221373   37614 ssh_runner.go:195] Run: systemctl --version
	I0621 18:50:00.227389   37614 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 18:50:00.385205   37614 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 18:50:00.394152   37614 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 18:50:00.394215   37614 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 18:50:00.403823   37614 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0621 18:50:00.403852   37614 start.go:494] detecting cgroup driver to use...
	I0621 18:50:00.403906   37614 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 18:50:00.419979   37614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 18:50:00.434440   37614 docker.go:217] disabling cri-docker service (if available) ...
	I0621 18:50:00.434502   37614 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 18:50:00.448314   37614 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 18:50:00.462079   37614 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 18:50:00.614685   37614 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 18:50:00.759729   37614 docker.go:233] disabling docker service ...
	I0621 18:50:00.759808   37614 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 18:50:00.777480   37614 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 18:50:00.792874   37614 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 18:50:00.942947   37614 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 18:50:01.096969   37614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 18:50:01.111115   37614 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 18:50:01.175106   37614 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 18:50:01.175190   37614 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.232028   37614 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 18:50:01.232101   37614 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.280475   37614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.294904   37614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.316249   37614 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 18:50:01.333062   37614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.348820   37614 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.371299   37614 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 18:50:01.389314   37614 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 18:50:01.401788   37614 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 18:50:01.422679   37614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:50:01.648445   37614 ssh_runner.go:195] Run: sudo systemctl restart crio
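The CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf is edited in place (pause image, cgroup manager, conmon cgroup, default sysctls) before systemd is reloaded and crio restarted. A minimal Go sketch of one such in-place substitution, the pause_image line, equivalent in effect to the sed command in the log above (purely illustrative, not minikube's implementation):

package main

import (
	"log"
	"os"
	"regexp"
)

// setPauseImage rewrites the pause_image setting in a CRI-O drop-in,
// the same effect as the sed substitution shown in the log above.
func setPauseImage(path, image string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.9"); err != nil {
		log.Fatal(err)
	}
}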
	I0621 18:50:02.047527   37614 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 18:50:02.047604   37614 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 18:50:02.052768   37614 start.go:562] Will wait 60s for crictl version
	I0621 18:50:02.052832   37614 ssh_runner.go:195] Run: which crictl
	I0621 18:50:02.056555   37614 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 18:50:02.094299   37614 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 18:50:02.094367   37614 ssh_runner.go:195] Run: crio --version
	I0621 18:50:02.123963   37614 ssh_runner.go:195] Run: crio --version
	I0621 18:50:02.156468   37614 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 18:50:02.158024   37614 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:50:02.161125   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:02.161548   37614 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:50:02.161570   37614 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:50:02.161875   37614 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 18:50:02.167481   37614 kubeadm.go:877] updating cluster {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.193 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0621 18:50:02.167692   37614 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 18:50:02.167755   37614 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:50:02.219832   37614 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 18:50:02.219854   37614 crio.go:433] Images already preloaded, skipping extraction
	I0621 18:50:02.219899   37614 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 18:50:02.255684   37614 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 18:50:02.255710   37614 cache_images.go:84] Images are preloaded, skipping loading
	I0621 18:50:02.255720   37614 kubeadm.go:928] updating node { 192.168.39.198 8443 v1.30.2 crio true true} ...
	I0621 18:50:02.255840   37614 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0621 18:50:02.255924   37614 ssh_runner.go:195] Run: crio config
	I0621 18:50:02.317976   37614 cni.go:84] Creating CNI manager for ""
	I0621 18:50:02.317997   37614 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0621 18:50:02.318008   37614 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0621 18:50:02.318027   37614 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.198 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-406291 NodeName:ha-406291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0621 18:50:02.318155   37614 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-406291"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0621 18:50:02.318171   37614 kube-vip.go:115] generating kube-vip config ...
	I0621 18:50:02.318209   37614 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0621 18:50:02.331312   37614 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0621 18:50:02.331435   37614 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0621 18:50:02.331501   37614 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 18:50:02.342410   37614 binaries.go:44] Found k8s binaries, skipping transfer
	I0621 18:50:02.342501   37614 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0621 18:50:02.353833   37614 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0621 18:50:02.372067   37614 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0621 18:50:02.391049   37614 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0621 18:50:02.409310   37614 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0621 18:50:02.427547   37614 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0621 18:50:02.433079   37614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 18:50:02.582453   37614 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0621 18:50:02.598236   37614 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291 for IP: 192.168.39.198
	I0621 18:50:02.598258   37614 certs.go:194] generating shared ca certs ...
	I0621 18:50:02.598278   37614 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:50:02.598473   37614 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 18:50:02.598527   37614 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 18:50:02.598538   37614 certs.go:256] generating profile certs ...
	I0621 18:50:02.598630   37614 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/client.key
	I0621 18:50:02.598657   37614 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.9def4995
	I0621 18:50:02.598668   37614 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.9def4995 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198 192.168.39.89 192.168.39.254]
	I0621 18:50:02.663764   37614 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.9def4995 ...
	I0621 18:50:02.663805   37614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.9def4995: {Name:mk333c8edf0e5497704ceac44948ed6d5eae057c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:50:02.664011   37614 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.9def4995 ...
	I0621 18:50:02.664028   37614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.9def4995: {Name:mk5cd7253a5d75c3e8a117ab1180e6cf66770645 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 18:50:02.664122   37614 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt.9def4995 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt
	I0621 18:50:02.664288   37614 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key.9def4995 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key
	I0621 18:50:02.664452   37614 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key
	I0621 18:50:02.664473   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 18:50:02.664492   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 18:50:02.664510   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 18:50:02.664528   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 18:50:02.664544   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 18:50:02.664558   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 18:50:02.664575   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 18:50:02.664593   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 18:50:02.664653   37614 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 18:50:02.664692   37614 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 18:50:02.664704   37614 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 18:50:02.664743   37614 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 18:50:02.664779   37614 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 18:50:02.664808   37614 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 18:50:02.664862   37614 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 18:50:02.664896   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:50:02.664913   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 18:50:02.664932   37614 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 18:50:02.665576   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 18:50:02.694113   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 18:50:02.722523   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 18:50:02.749537   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 18:50:02.776614   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0621 18:50:02.805311   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0621 18:50:02.832592   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 18:50:02.857479   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 18:50:02.881711   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 18:50:02.907387   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 18:50:02.934334   37614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 18:50:02.959508   37614 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0621 18:50:02.977465   37614 ssh_runner.go:195] Run: openssl version
	I0621 18:50:02.983767   37614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 18:50:02.995314   37614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:50:03.001937   37614 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:50:03.002002   37614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 18:50:03.009327   37614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0621 18:50:03.022240   37614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 18:50:03.037533   37614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 18:50:03.042517   37614 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 18:50:03.042581   37614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 18:50:03.048576   37614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 18:50:03.059273   37614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 18:50:03.071497   37614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 18:50:03.076360   37614 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 18:50:03.076413   37614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 18:50:03.082259   37614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
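Each CA certificate is copied under /usr/share/ca-certificates and a hash-named symlink (the hash coming from openssl x509 -hash) is installed in /etc/ssl/certs so OpenSSL can locate it. A small Go sketch of that symlink-install step, assuming the subject hash has already been computed as in the log (illustrative only):

package main

import (
	"log"
	"os"
)

// linkHashedCert installs /etc/ssl/certs/<hash>.0 pointing at certPath,
// mirroring the "test -L ... || ln -fs ..." commands in the log above.
func linkHashedCert(certPath, hash string) error {
	link := "/etc/ssl/certs/" + hash + ".0"
	if _, err := os.Lstat(link); err == nil {
		return nil // link (or file) already present, nothing to do
	}
	return os.Symlink(certPath, link)
}

func main() {
	// Hash value taken from the log: b5213941 for minikubeCA.pem.
	if err := linkHashedCert("/etc/ssl/certs/minikubeCA.pem", "b5213941"); err != nil {
		log.Fatal(err)
	}
}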
	I0621 18:50:03.092484   37614 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 18:50:03.097277   37614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0621 18:50:03.103376   37614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0621 18:50:03.109351   37614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0621 18:50:03.115157   37614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0621 18:50:03.120911   37614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0621 18:50:03.126507   37614 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
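	Each `openssl x509 ... -checkend 86400` run above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the remaining validity of the existing certs is verified before they are reused. A minimal sketch of that check, reusing one of the cert paths from the log; illustrative only:
	
	  # -checkend returns exit status 0 only if the cert is still valid 86400s from now
	  if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	    echo "certificate still valid for at least 24h"
	  else
	    echo "certificate expires within 24h and would need to be regenerated"
	  fi
	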
	I0621 18:50:03.132154   37614 kubeadm.go:391] StartCluster: {Name:ha-406291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-406291 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.193 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:50:03.132279   37614 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0621 18:50:03.132331   37614 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0621 18:50:03.170290   37614 cri.go:89] found id: "6bba601718e9734309428daa119e2e5d6e129b3436277dc5011fa708f21b8de0"
	I0621 18:50:03.170317   37614 cri.go:89] found id: "adf7b4a3e9492eae203fe2ae963d6b1b131c8c6c809259fcf8ee94872bdf0bea"
	I0621 18:50:03.170320   37614 cri.go:89] found id: "6d732e2622f11e5a01de01fc8103ee96383981edc2d6e18b40f0d42178986a25"
	I0621 18:50:03.170323   37614 cri.go:89] found id: "6088ccc5ec4be753f7a30542686c05bbcc3444300a99daa40b0bb5bd7ea37c3c"
	I0621 18:50:03.170326   37614 cri.go:89] found id: "9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b"
	I0621 18:50:03.170329   37614 cri.go:89] found id: "468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d"
	I0621 18:50:03.170331   37614 cri.go:89] found id: "e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547"
	I0621 18:50:03.170334   37614 cri.go:89] found id: "96a229fabb5aa95dea40a5ecf086bd5fb8e221098bc541613e955733ebb84631"
	I0621 18:50:03.170336   37614 cri.go:89] found id: "a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d"
	I0621 18:50:03.170341   37614 cri.go:89] found id: "89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8"
	I0621 18:50:03.170344   37614 cri.go:89] found id: "2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3"
	I0621 18:50:03.170346   37614 cri.go:89] found id: "3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31"
	I0621 18:50:03.170349   37614 cri.go:89] found id: ""
	I0621 18:50:03.170399   37614 ssh_runner.go:195] Run: sudo runc list -f json
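	The container IDs found above come from filtering CRI containers by the kube-system namespace label, exactly as the logged crictl invocation shows. A minimal sketch of the equivalent query on the node, assuming crictl is pointed at the CRI-O socket; illustrative only:
	
	  # list all (including exited) kube-system container IDs, then peek at each one's status
	  for id in $(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system); do
	    sudo crictl inspect "$id" | head -n 5
	  done
	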
	
	
	==> CRI-O <==
	Jun 21 18:54:20 ha-406291 crio[4830]: time="2024-06-21 18:54:20.987057881Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718996060987036263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5cbf2001-ef00-4655-9b6a-18d7632a476f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:54:20 ha-406291 crio[4830]: time="2024-06-21 18:54:20.987764358Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a460283a-2b3b-4322-bf67-255ddb1ac09f name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:20 ha-406291 crio[4830]: time="2024-06-21 18:54:20.987819486Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a460283a-2b3b-4322-bf67-255ddb1ac09f name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:20 ha-406291 crio[4830]: time="2024-06-21 18:54:20.988492233Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:09a2e3d098856f2200e39c92669f6f175a32d42297a9a3d5c291978d1f8d0d74,PodSandboxId:231b7531a974b4fa1168f271b37ea5cf33df2e5ab59ea67d46149f9a8197404b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718995840721463906,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb10cac6d1c3e97a71930fb9a7f4b79dce5391ffc03f1ea516374c17821d716,PodSandboxId:908bde46281af414c0075aabce7890dfa087f381a3ef9a5b0651ab520cdb8435,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718995822483073221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"con
tainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c869f01d25b200b4c3df8e084f4eff83bea86cbd7c409e04f0a85157042dec2c,PodSandboxId:e10e95f5f35c01c0eb2ad3a0a49910bd49cf827b26c09a78b7dd3d2faa15fe55,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718995822456885612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c
679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35ca450b8450a611e9ad835bbf3d408c728e7e7d1fbf258c8f249d80bcf038f,PodSandboxId:8fec4c6e62141364888e488aa814c1f06b60e58be5c4bb875b6e1eb5ffc4a250,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718995821779424178,Labels:map[st
ring]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 369c576788ec675acc0ff507ad4caf20,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:246b5b36ac09f427c065ee257a5df705d3a4d6bb3c0bce5b8322f7d64496dc52,PodSandboxId:047b75f8fe402d3c3c7fcc65fc18c56ffec45e20f3f1a452338a41433d34e078,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718995807698855971,Labels:map[string]string{io.kubernetes.container.n
ame: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41ffe84b8dea76129f1fa5d5726f6cf43e1409a959998ebe3a3fc56d8699d7f,PodSandboxId:4a9342a5a2eeb43140514126f52d0c9fd38f727529c857e0891c8bf2d31c4a8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718995807806583037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.p
od.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dcbcf864ab99955feff994f6bcd539edc4380e9bffd7cd534dd967c7bad498,PodSandboxId:535a7ff15105f569395c6cf7f02fefc79c194a97e051fa5af9412f15bd20af54,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718995807504571464,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce53eeeec0f21c6681925b7c5e72b8595ab65de8b0d0b768da43f7f434af72d,PodSandboxId:bca8e9a757e1c46d1ca2cedba74336bb99f1b505f861e6ca80ae9d5053f4ed3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718995807469500725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59d0df4fcf162ec60f5d928ad001ff6a374887d38c9f6791aab5c706f47c791,PodSandboxId:4e2453ce7944062b3c2f93ec84b80a2b6493725c3f52899047ed966b2d36fd6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718995807408632939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c120a578b20e1b617a5b93202c07c27c30de5bfc4580b4c826235b3afc8204,PodSandboxId:84fbafaf5a0bea8e4df39e98942eb41300c5281d1b6217f02587c6fa3fbd2b34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718995807315798233,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2e61853ab788fb7b5222dedf458d7085852d9caf32cf492e3bce968e130374,PodSandboxId:b77046a9f35081deae7f5de5700954014cb07d84dbad8bcca2e9ad955a3e015a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718995807128041977,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b0
97b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bba601718e9734309428daa119e2e5d6e129b3436277dc5011fa708f21b8de0,PodSandboxId:ef224dee216468e736bbfc8457b6d7542c385548fcb0666c2ff7fa52d43b1156,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718995801444255575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annota
tions:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adf7b4a3e9492eae203fe2ae963d6b1b131c8c6c809259fcf8ee94872bdf0bea,PodSandboxId:3d95d41781333e360e7471bd45a44f887d5365c40348dafee3d31ac6130d068b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718995801432250413,Labels:map[string]string{io
.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5
b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718994540131805136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718994459852946952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718994458069993945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718994457887549344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0
d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718994438148586283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a
899,State:CONTAINER_EXITED,CreatedAt:1718994438095721977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:17189
94438069880812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:171899443800395583
8,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a460283a-2b3b-4322-bf67-255ddb1ac09f name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:21 ha-406291 crio[4830]: time="2024-06-21 18:54:21.035615260Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0737d5f2-a133-441f-8aea-cf26f1130c1b name=/runtime.v1.RuntimeService/Version
	Jun 21 18:54:21 ha-406291 crio[4830]: time="2024-06-21 18:54:21.035707253Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0737d5f2-a133-441f-8aea-cf26f1130c1b name=/runtime.v1.RuntimeService/Version
	Jun 21 18:54:21 ha-406291 crio[4830]: time="2024-06-21 18:54:21.036814525Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8b39804c-2787-41de-ae1b-145cd57a1ede name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:54:21 ha-406291 crio[4830]: time="2024-06-21 18:54:21.037288128Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718996061037262524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8b39804c-2787-41de-ae1b-145cd57a1ede name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:54:21 ha-406291 crio[4830]: time="2024-06-21 18:54:21.038200291Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ee8b6ba-4fe2-4860-91fa-1805b29a1630 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:21 ha-406291 crio[4830]: time="2024-06-21 18:54:21.038293292Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ee8b6ba-4fe2-4860-91fa-1805b29a1630 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:21 ha-406291 crio[4830]: time="2024-06-21 18:54:21.038734823Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:09a2e3d098856f2200e39c92669f6f175a32d42297a9a3d5c291978d1f8d0d74,PodSandboxId:231b7531a974b4fa1168f271b37ea5cf33df2e5ab59ea67d46149f9a8197404b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718995840721463906,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb10cac6d1c3e97a71930fb9a7f4b79dce5391ffc03f1ea516374c17821d716,PodSandboxId:908bde46281af414c0075aabce7890dfa087f381a3ef9a5b0651ab520cdb8435,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718995822483073221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"con
tainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c869f01d25b200b4c3df8e084f4eff83bea86cbd7c409e04f0a85157042dec2c,PodSandboxId:e10e95f5f35c01c0eb2ad3a0a49910bd49cf827b26c09a78b7dd3d2faa15fe55,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718995822456885612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c
679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35ca450b8450a611e9ad835bbf3d408c728e7e7d1fbf258c8f249d80bcf038f,PodSandboxId:8fec4c6e62141364888e488aa814c1f06b60e58be5c4bb875b6e1eb5ffc4a250,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718995821779424178,Labels:map[st
ring]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 369c576788ec675acc0ff507ad4caf20,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:246b5b36ac09f427c065ee257a5df705d3a4d6bb3c0bce5b8322f7d64496dc52,PodSandboxId:047b75f8fe402d3c3c7fcc65fc18c56ffec45e20f3f1a452338a41433d34e078,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718995807698855971,Labels:map[string]string{io.kubernetes.container.n
ame: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41ffe84b8dea76129f1fa5d5726f6cf43e1409a959998ebe3a3fc56d8699d7f,PodSandboxId:4a9342a5a2eeb43140514126f52d0c9fd38f727529c857e0891c8bf2d31c4a8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718995807806583037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.p
od.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dcbcf864ab99955feff994f6bcd539edc4380e9bffd7cd534dd967c7bad498,PodSandboxId:535a7ff15105f569395c6cf7f02fefc79c194a97e051fa5af9412f15bd20af54,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718995807504571464,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce53eeeec0f21c6681925b7c5e72b8595ab65de8b0d0b768da43f7f434af72d,PodSandboxId:bca8e9a757e1c46d1ca2cedba74336bb99f1b505f861e6ca80ae9d5053f4ed3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718995807469500725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59d0df4fcf162ec60f5d928ad001ff6a374887d38c9f6791aab5c706f47c791,PodSandboxId:4e2453ce7944062b3c2f93ec84b80a2b6493725c3f52899047ed966b2d36fd6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718995807408632939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c120a578b20e1b617a5b93202c07c27c30de5bfc4580b4c826235b3afc8204,PodSandboxId:84fbafaf5a0bea8e4df39e98942eb41300c5281d1b6217f02587c6fa3fbd2b34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718995807315798233,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2e61853ab788fb7b5222dedf458d7085852d9caf32cf492e3bce968e130374,PodSandboxId:b77046a9f35081deae7f5de5700954014cb07d84dbad8bcca2e9ad955a3e015a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718995807128041977,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b0
97b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bba601718e9734309428daa119e2e5d6e129b3436277dc5011fa708f21b8de0,PodSandboxId:ef224dee216468e736bbfc8457b6d7542c385548fcb0666c2ff7fa52d43b1156,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718995801444255575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annota
tions:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adf7b4a3e9492eae203fe2ae963d6b1b131c8c6c809259fcf8ee94872bdf0bea,PodSandboxId:3d95d41781333e360e7471bd45a44f887d5365c40348dafee3d31ac6130d068b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718995801432250413,Labels:map[string]string{io
.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5
b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718994540131805136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718994459852946952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718994458069993945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718994457887549344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0
d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718994438148586283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a
899,State:CONTAINER_EXITED,CreatedAt:1718994438095721977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:17189
94438069880812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:171899443800395583
8,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8ee8b6ba-4fe2-4860-91fa-1805b29a1630 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:21 ha-406291 crio[4830]: time="2024-06-21 18:54:21.078511527Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc96076d-cdf9-49ea-af10-388bd5f9bf38 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:54:21 ha-406291 crio[4830]: time="2024-06-21 18:54:21.078681515Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc96076d-cdf9-49ea-af10-388bd5f9bf38 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:54:21 ha-406291 crio[4830]: time="2024-06-21 18:54:21.081000058Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f0b9b76d-c763-4266-9f2e-7b0bca890acf name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:54:21 ha-406291 crio[4830]: time="2024-06-21 18:54:21.081523054Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718996061081495823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0b9b76d-c763-4266-9f2e-7b0bca890acf name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:54:21 ha-406291 crio[4830]: time="2024-06-21 18:54:21.082026612Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51f3c672-baf1-4e2f-b1e8-961fd1c72096 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:21 ha-406291 crio[4830]: time="2024-06-21 18:54:21.082091315Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51f3c672-baf1-4e2f-b1e8-961fd1c72096 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:21 ha-406291 crio[4830]: time="2024-06-21 18:54:21.082535924Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:09a2e3d098856f2200e39c92669f6f175a32d42297a9a3d5c291978d1f8d0d74,PodSandboxId:231b7531a974b4fa1168f271b37ea5cf33df2e5ab59ea67d46149f9a8197404b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718995840721463906,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb10cac6d1c3e97a71930fb9a7f4b79dce5391ffc03f1ea516374c17821d716,PodSandboxId:908bde46281af414c0075aabce7890dfa087f381a3ef9a5b0651ab520cdb8435,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718995822483073221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"con
tainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c869f01d25b200b4c3df8e084f4eff83bea86cbd7c409e04f0a85157042dec2c,PodSandboxId:e10e95f5f35c01c0eb2ad3a0a49910bd49cf827b26c09a78b7dd3d2faa15fe55,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718995822456885612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c
679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35ca450b8450a611e9ad835bbf3d408c728e7e7d1fbf258c8f249d80bcf038f,PodSandboxId:8fec4c6e62141364888e488aa814c1f06b60e58be5c4bb875b6e1eb5ffc4a250,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718995821779424178,Labels:map[st
ring]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 369c576788ec675acc0ff507ad4caf20,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:246b5b36ac09f427c065ee257a5df705d3a4d6bb3c0bce5b8322f7d64496dc52,PodSandboxId:047b75f8fe402d3c3c7fcc65fc18c56ffec45e20f3f1a452338a41433d34e078,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718995807698855971,Labels:map[string]string{io.kubernetes.container.n
ame: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41ffe84b8dea76129f1fa5d5726f6cf43e1409a959998ebe3a3fc56d8699d7f,PodSandboxId:4a9342a5a2eeb43140514126f52d0c9fd38f727529c857e0891c8bf2d31c4a8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718995807806583037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.p
od.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dcbcf864ab99955feff994f6bcd539edc4380e9bffd7cd534dd967c7bad498,PodSandboxId:535a7ff15105f569395c6cf7f02fefc79c194a97e051fa5af9412f15bd20af54,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718995807504571464,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce53eeeec0f21c6681925b7c5e72b8595ab65de8b0d0b768da43f7f434af72d,PodSandboxId:bca8e9a757e1c46d1ca2cedba74336bb99f1b505f861e6ca80ae9d5053f4ed3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718995807469500725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59d0df4fcf162ec60f5d928ad001ff6a374887d38c9f6791aab5c706f47c791,PodSandboxId:4e2453ce7944062b3c2f93ec84b80a2b6493725c3f52899047ed966b2d36fd6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718995807408632939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c120a578b20e1b617a5b93202c07c27c30de5bfc4580b4c826235b3afc8204,PodSandboxId:84fbafaf5a0bea8e4df39e98942eb41300c5281d1b6217f02587c6fa3fbd2b34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718995807315798233,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2e61853ab788fb7b5222dedf458d7085852d9caf32cf492e3bce968e130374,PodSandboxId:b77046a9f35081deae7f5de5700954014cb07d84dbad8bcca2e9ad955a3e015a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718995807128041977,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b0
97b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bba601718e9734309428daa119e2e5d6e129b3436277dc5011fa708f21b8de0,PodSandboxId:ef224dee216468e736bbfc8457b6d7542c385548fcb0666c2ff7fa52d43b1156,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718995801444255575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annota
tions:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adf7b4a3e9492eae203fe2ae963d6b1b131c8c6c809259fcf8ee94872bdf0bea,PodSandboxId:3d95d41781333e360e7471bd45a44f887d5365c40348dafee3d31ac6130d068b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718995801432250413,Labels:map[string]string{io
.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5
b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718994540131805136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718994459852946952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718994458069993945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718994457887549344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0
d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718994438148586283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a
899,State:CONTAINER_EXITED,CreatedAt:1718994438095721977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:17189
94438069880812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:171899443800395583
8,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=51f3c672-baf1-4e2f-b1e8-961fd1c72096 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:21 ha-406291 crio[4830]: time="2024-06-21 18:54:21.123752830Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=475e0406-e7cd-4cc4-b395-e1b79a494636 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:54:21 ha-406291 crio[4830]: time="2024-06-21 18:54:21.123844676Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=475e0406-e7cd-4cc4-b395-e1b79a494636 name=/runtime.v1.RuntimeService/Version
	Jun 21 18:54:21 ha-406291 crio[4830]: time="2024-06-21 18:54:21.125008916Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7be2c812-6352-487f-a54d-ca537cd9e06d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:54:21 ha-406291 crio[4830]: time="2024-06-21 18:54:21.125497022Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718996061125470487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144981,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7be2c812-6352-487f-a54d-ca537cd9e06d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 18:54:21 ha-406291 crio[4830]: time="2024-06-21 18:54:21.126267842Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3253300d-c269-4944-8f27-632b2db32d9c name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:21 ha-406291 crio[4830]: time="2024-06-21 18:54:21.126332519Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3253300d-c269-4944-8f27-632b2db32d9c name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 18:54:21 ha-406291 crio[4830]: time="2024-06-21 18:54:21.126754656Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:09a2e3d098856f2200e39c92669f6f175a32d42297a9a3d5c291978d1f8d0d74,PodSandboxId:231b7531a974b4fa1168f271b37ea5cf33df2e5ab59ea67d46149f9a8197404b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718995840721463906,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb10cac6d1c3e97a71930fb9a7f4b79dce5391ffc03f1ea516374c17821d716,PodSandboxId:908bde46281af414c0075aabce7890dfa087f381a3ef9a5b0651ab520cdb8435,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718995822483073221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annotations:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"con
tainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c869f01d25b200b4c3df8e084f4eff83bea86cbd7c409e04f0a85157042dec2c,PodSandboxId:e10e95f5f35c01c0eb2ad3a0a49910bd49cf827b26c09a78b7dd3d2faa15fe55,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718995822456885612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c
679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35ca450b8450a611e9ad835bbf3d408c728e7e7d1fbf258c8f249d80bcf038f,PodSandboxId:8fec4c6e62141364888e488aa814c1f06b60e58be5c4bb875b6e1eb5ffc4a250,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718995821779424178,Labels:map[st
ring]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 369c576788ec675acc0ff507ad4caf20,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:246b5b36ac09f427c065ee257a5df705d3a4d6bb3c0bce5b8322f7d64496dc52,PodSandboxId:047b75f8fe402d3c3c7fcc65fc18c56ffec45e20f3f1a452338a41433d34e078,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718995807698855971,Labels:map[string]string{io.kubernetes.container.n
ame: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41ffe84b8dea76129f1fa5d5726f6cf43e1409a959998ebe3a3fc56d8699d7f,PodSandboxId:4a9342a5a2eeb43140514126f52d0c9fd38f727529c857e0891c8bf2d31c4a8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718995807806583037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.p
od.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dcbcf864ab99955feff994f6bcd539edc4380e9bffd7cd534dd967c7bad498,PodSandboxId:535a7ff15105f569395c6cf7f02fefc79c194a97e051fa5af9412f15bd20af54,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718995807504571464,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ce53eeeec0f21c6681925b7c5e72b8595ab65de8b0d0b768da43f7f434af72d,PodSandboxId:bca8e9a757e1c46d1ca2cedba74336bb99f1b505f861e6ca80ae9d5053f4ed3d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718995807469500725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59d0df4fcf162ec60f5d928ad001ff6a374887d38c9f6791aab5c706f47c791,PodSandboxId:4e2453ce7944062b3c2f93ec84b80a2b6493725c3f52899047ed966b2d36fd6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718995807408632939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c120a578b20e1b617a5b93202c07c27c30de5bfc4580b4c826235b3afc8204,PodSandboxId:84fbafaf5a0bea8e4df39e98942eb41300c5281d1b6217f02587c6fa3fbd2b34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718995807315798233,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2e61853ab788fb7b5222dedf458d7085852d9caf32cf492e3bce968e130374,PodSandboxId:b77046a9f35081deae7f5de5700954014cb07d84dbad8bcca2e9ad955a3e015a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718995807128041977,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b0
97b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bba601718e9734309428daa119e2e5d6e129b3436277dc5011fa708f21b8de0,PodSandboxId:ef224dee216468e736bbfc8457b6d7542c385548fcb0666c2ff7fa52d43b1156,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718995801444255575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7ng4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724701c-6f0e-45ed-8fc7-70245d4fa569,},Annota
tions:map[string]string{io.kubernetes.container.hash: e9dc2233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adf7b4a3e9492eae203fe2ae963d6b1b131c8c6c809259fcf8ee94872bdf0bea,PodSandboxId:3d95d41781333e360e7471bd45a44f887d5365c40348dafee3d31ac6130d068b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718995801432250413,Labels:map[string]string{io
.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nx5xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375157ef-5af0-41b9-8ed9-162e5a88c679,},Annotations:map[string]string{io.kubernetes.container.hash: 611f455d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252cb2f279857b80cfc6c701089f41991129c04b70abeb846b30882e2c665408,PodSandboxId:cd0fd4f6a3d6cd084d2f45842c8b800d5e90493d4ee1c849abc768254d7c6531,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5
b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718994540131805136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qvl48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 59f123aa-60d0-4d29-b58e-cb9a43c26895,},Annotations:map[string]string{io.kubernetes.container.hash: a73416c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ad7353127926e3c79ac7b2068cd6d5b94beefb6c266ccac1b3b567113024b,PodSandboxId:ab6a16146209c5cb5382869ac23a5b1456a089779d4f9301d3e0fade484313e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718994459852946952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a39ae0-87ac-492a-a711-290e61bb895e,},Annotations:map[string]string{io.kubernetes.container.hash: a13b39bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d,PodSandboxId:956df8749e8db350cdcc534087f3bb7a212c6c1f51d1bebed27aa09a6dd443dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718994458069993945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e921d86f-0ac3-413e-9e85-e809139ca210,},Annotations:map[string]string{io.kubernetes.container.hash: af35f4f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547,PodSandboxId:ab9fd8c2e0094b5d6ce1c56611c8348bf3599083d6753208e1cd8d061915718f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718994457887549344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xnbqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11acb4f0-c5e7-4ec5-9d5e-3f470b9d5073,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa78979,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d,PodSandboxId:7cae0fc993f3aa93f18dad7bcd353300f3d92cfd00fe954be039f37ab9945d4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0
d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718994438148586283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81efe8b097b0aaeaaac87f9a6e2dfe3b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8,PodSandboxId:afce4542ea7ca97dbc94a8c737e508240bc331708d52d0f5801605c58d16744e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a
899,State:CONTAINER_EXITED,CreatedAt:1718994438095721977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28eb1f9a7974972f95837a71475ffe97,},Annotations:map[string]string{io.kubernetes.container.hash: 215bce33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3,PodSandboxId:9552de7a0cb739fa78a45784d863f051a1c1cfcec5c2987dd50bdc33fee99320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:17189
94438069880812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2d2e5dadb6d48084ee46b3119245c5,},Annotations:map[string]string{io.kubernetes.container.hash: a9ba7dea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31,PodSandboxId:2b8837f8e36da673b833225d75047e1a783e42de659e1ca0f1595eba13f2a075,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:171899443800395583
8,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd582f38b9812a77200f468c3cf9c0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3253300d-c269-4944-8f27-632b2db32d9c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	09a2e3d098856       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   231b7531a974b       busybox-fc5497c4f-qvl48
	3eb10cac6d1c3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   2                   908bde46281af       coredns-7db6d8ff4d-7ng4v
	c869f01d25b20       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   2                   e10e95f5f35c0       coredns-7db6d8ff4d-nx5xs
	e35ca450b8450       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      3 minutes ago       Running             kube-vip                  0                   8fec4c6e62141       kube-vip-ha-406291
	e41ffe84b8dea       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      4 minutes ago       Running             kindnet-cni               1                   4a9342a5a2eeb       kindnet-vnds7
	246b5b36ac09f       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      4 minutes ago       Running             kube-proxy                1                   047b75f8fe402       kube-proxy-xnbqj
	e8dcbcf864ab9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   535a7ff15105f       etcd-ha-406291
	6ce53eeeec0f2       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      4 minutes ago       Running             kube-apiserver            1                   bca8e9a757e1c       kube-apiserver-ha-406291
	d59d0df4fcf16       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   4e2453ce79440       storage-provisioner
	e9c120a578b20       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      4 minutes ago       Running             kube-controller-manager   1                   84fbafaf5a0be       kube-controller-manager-ha-406291
	6f2e61853ab78       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      4 minutes ago       Running             kube-scheduler            1                   b77046a9f3508       kube-scheduler-ha-406291
	6bba601718e97       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Exited              coredns                   1                   ef224dee21646       coredns-7db6d8ff4d-7ng4v
	adf7b4a3e9492       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Exited              coredns                   1                   3d95d41781333       coredns-7db6d8ff4d-nx5xs
	252cb2f279857       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   25 minutes ago      Exited              busybox                   0                   cd0fd4f6a3d6c       busybox-fc5497c4f-qvl48
	9d0ad73531279       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      26 minutes ago      Exited              storage-provisioner       0                   ab6a16146209c       storage-provisioner
	468b13f5a8054       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      26 minutes ago      Exited              kindnet-cni               0                   956df8749e8db       kindnet-vnds7
	e41f8891c5177       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      26 minutes ago      Exited              kube-proxy                0                   ab9fd8c2e0094       kube-proxy-xnbqj
	a143e6000662a       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      27 minutes ago      Exited              kube-scheduler            0                   7cae0fc993f3a       kube-scheduler-ha-406291
	89b399d67fa40       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      27 minutes ago      Exited              etcd                      0                   afce4542ea7ca       etcd-ha-406291
	2d71c6ae5cee5       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      27 minutes ago      Exited              kube-apiserver            0                   9552de7a0cb73       kube-apiserver-ha-406291
	3fbe446b39e8d       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      27 minutes ago      Exited              kube-controller-manager   0                   2b8837f8e36da       kube-controller-manager-ha-406291
	
	
	==> coredns [3eb10cac6d1c3e97a71930fb9a7f4b79dce5391ffc03f1ea516374c17821d716] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54228 - 26713 "HINFO IN 4548532589898165947.6437560420477737975. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010774765s
	
	
	==> coredns [6bba601718e9734309428daa119e2e5d6e129b3436277dc5011fa708f21b8de0] <==
	
	
	==> coredns [adf7b4a3e9492eae203fe2ae963d6b1b131c8c6c809259fcf8ee94872bdf0bea] <==
	
	
	==> coredns [c869f01d25b200b4c3df8e084f4eff83bea86cbd7c409e04f0a85157042dec2c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:32776 - 50363 "HINFO IN 2533289171171185985.5104556903785863448. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020452492s
	
	
	==> describe nodes <==
	Name:               ha-406291
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=ha-406291
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_21T18_27_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 18:27:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406291
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 18:54:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 18:50:22 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 18:50:22 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 18:50:22 +0000   Fri, 21 Jun 2024 18:27:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 18:50:22 +0000   Fri, 21 Jun 2024 18:27:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    ha-406291
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 10b5f2f4e64d426eb3a71e7a23c0cea5
	  System UUID:                10b5f2f4-e64d-426e-b3a7-1e7a23c0cea5
	  Boot ID:                    10778ad9-ed13-4749-a084-25b2b2bfde76
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qvl48              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 coredns-7db6d8ff4d-7ng4v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-7db6d8ff4d-nx5xs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-406291                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         26m
	  kube-system                 kindnet-vnds7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-406291             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-controller-manager-ha-406291    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-xnbqj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-406291             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-vip-ha-406291                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 26m    kube-proxy       
	  Normal   Starting                 3m58s  kube-proxy       
	  Normal   Starting                 26m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  26m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  26m    kubelet          Node ha-406291 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    26m    kubelet          Node ha-406291 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     26m    kubelet          Node ha-406291 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           26m    node-controller  Node ha-406291 event: Registered Node ha-406291 in Controller
	  Normal   NodeReady                26m    kubelet          Node ha-406291 status is now: NodeReady
	  Warning  ContainerGCFailed        4m57s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           3m57s  node-controller  Node ha-406291 event: Registered Node ha-406291 in Controller
	
	
	==> dmesg <==
	[  +4.855560] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun21 18:27] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.057394] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056681] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.167604] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.147792] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.253886] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +3.905184] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.549385] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.060073] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.066237] systemd-fstab-generator[1360]: Ignoring "noauto" option for root device
	[  +0.078680] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.552032] kauditd_printk_skb: 21 callbacks suppressed
	[Jun21 18:28] kauditd_printk_skb: 74 callbacks suppressed
	[Jun21 18:50] systemd-fstab-generator[4547]: Ignoring "noauto" option for root device
	[  +0.147300] systemd-fstab-generator[4559]: Ignoring "noauto" option for root device
	[  +0.179225] systemd-fstab-generator[4573]: Ignoring "noauto" option for root device
	[  +0.153967] systemd-fstab-generator[4585]: Ignoring "noauto" option for root device
	[  +0.498288] systemd-fstab-generator[4740]: Ignoring "noauto" option for root device
	[  +0.987159] systemd-fstab-generator[4965]: Ignoring "noauto" option for root device
	[  +4.443961] kauditd_printk_skb: 142 callbacks suppressed
	[ +14.867731] kauditd_printk_skb: 86 callbacks suppressed
	[  +7.940594] kauditd_printk_skb: 16 callbacks suppressed
	
	
	==> etcd [89b399d67fa40e16a03cabb28dca7a07826900a21f9e90b9b9b97676b58e79f8] <==
	{"level":"info","ts":"2024-06-21T18:27:37.357719Z","caller":"traceutil/trace.go:171","msg":"trace[571743030] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"105.067279ms","start":"2024-06-21T18:27:37.252598Z","end":"2024-06-21T18:27:37.357665Z","steps":["trace[571743030] 'process raft request'  (duration: 48.775466ms)","trace[571743030] 'compare'  (duration: 56.093787ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T18:28:12.689426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.176174ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11593268453381319053 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:496 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:369 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-21T18:28:12.689586Z","caller":"traceutil/trace.go:171","msg":"trace[939483523] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"172.541349ms","start":"2024-06-21T18:28:12.517021Z","end":"2024-06-21T18:28:12.689563Z","steps":["trace[939483523] 'process raft request'  (duration: 46.605278ms)","trace[939483523] 'compare'  (duration: 124.988397ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-21T18:37:19.55118Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2024-06-21T18:37:19.562898Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":969,"took":"11.353931ms","hash":518064132,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2441216,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-06-21T18:37:19.562955Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":518064132,"revision":969,"compact-revision":-1}
	{"level":"info","ts":"2024-06-21T18:41:01.46327Z","caller":"traceutil/trace.go:171","msg":"trace[373022302] transaction","detail":"{read_only:false; response_revision:1916; number_of_response:1; }","duration":"202.232692ms","start":"2024-06-21T18:41:01.260997Z","end":"2024-06-21T18:41:01.46323Z","steps":["trace[373022302] 'process raft request'  (duration: 201.291371ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-21T18:41:01.463374Z","caller":"traceutil/trace.go:171","msg":"trace[1787973675] transaction","detail":"{read_only:false; response_revision:1917; number_of_response:1; }","duration":"177.381269ms","start":"2024-06-21T18:41:01.285981Z","end":"2024-06-21T18:41:01.463362Z","steps":["trace[1787973675] 'process raft request'  (duration: 177.120594ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-21T18:42:19.558621Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1509}
	{"level":"info","ts":"2024-06-21T18:42:19.563203Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1509,"took":"4.232264ms","hash":4134822789,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2011136,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-06-21T18:42:19.563247Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4134822789,"revision":1509,"compact-revision":969}
	{"level":"info","ts":"2024-06-21T18:47:19.567745Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2121}
	{"level":"info","ts":"2024-06-21T18:47:19.578898Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2121,"took":"9.848541ms","hash":4103272021,"current-db-size-bytes":2441216,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2158592,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-06-21T18:47:19.579002Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4103272021,"revision":2121,"compact-revision":1509}
	{"level":"info","ts":"2024-06-21T18:48:28.996649Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-06-21T18:48:28.997685Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"ha-406291","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.198:2380"],"advertise-client-urls":["https://192.168.39.198:2379"]}
	{"level":"warn","ts":"2024-06-21T18:48:28.997914Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/06/21 18:48:28 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-21T18:48:29.019664Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-21T18:48:29.07084Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.198:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-21T18:48:29.070996Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.198:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-21T18:48:29.071071Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f1d2ab5330a2a0e3","current-leader-member-id":"f1d2ab5330a2a0e3"}
	{"level":"info","ts":"2024-06-21T18:48:29.073709Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.198:2380"}
	{"level":"info","ts":"2024-06-21T18:48:29.073927Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.198:2380"}
	{"level":"info","ts":"2024-06-21T18:48:29.073993Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-406291","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.198:2380"],"advertise-client-urls":["https://192.168.39.198:2379"]}
	
	
	==> etcd [e8dcbcf864ab99955feff994f6bcd539edc4380e9bffd7cd534dd967c7bad498] <==
	{"level":"info","ts":"2024-06-21T18:50:08.468075Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-21T18:50:08.468105Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-21T18:50:08.501093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 switched to configuration voters=(17425178282036469987)"}
	{"level":"info","ts":"2024-06-21T18:50:08.50936Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9fb372ad12afeb1b","local-member-id":"f1d2ab5330a2a0e3","added-peer-id":"f1d2ab5330a2a0e3","added-peer-peer-urls":["https://192.168.39.198:2380"]}
	{"level":"info","ts":"2024-06-21T18:50:08.509531Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9fb372ad12afeb1b","local-member-id":"f1d2ab5330a2a0e3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:50:08.509572Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T18:50:08.501761Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-21T18:50:08.529317Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f1d2ab5330a2a0e3","initial-advertise-peer-urls":["https://192.168.39.198:2380"],"listen-peer-urls":["https://192.168.39.198:2380"],"advertise-client-urls":["https://192.168.39.198:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.198:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-21T18:50:08.529422Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-21T18:50:08.501793Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.198:2380"}
	{"level":"info","ts":"2024-06-21T18:50:08.529674Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.198:2380"}
	{"level":"info","ts":"2024-06-21T18:50:10.027082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-21T18:50:10.02726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-21T18:50:10.027346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgPreVoteResp from f1d2ab5330a2a0e3 at term 2"}
	{"level":"info","ts":"2024-06-21T18:50:10.027392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became candidate at term 3"}
	{"level":"info","ts":"2024-06-21T18:50:10.027417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 received MsgVoteResp from f1d2ab5330a2a0e3 at term 3"}
	{"level":"info","ts":"2024-06-21T18:50:10.027444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f1d2ab5330a2a0e3 became leader at term 3"}
	{"level":"info","ts":"2024-06-21T18:50:10.027474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f1d2ab5330a2a0e3 elected leader f1d2ab5330a2a0e3 at term 3"}
	{"level":"info","ts":"2024-06-21T18:50:10.029196Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f1d2ab5330a2a0e3","local-member-attributes":"{Name:ha-406291 ClientURLs:[https://192.168.39.198:2379]}","request-path":"/0/members/f1d2ab5330a2a0e3/attributes","cluster-id":"9fb372ad12afeb1b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-21T18:50:10.029242Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:50:10.02933Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T18:50:10.02982Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-21T18:50:10.029851Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-21T18:50:10.031528Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-21T18:50:10.031596Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.198:2379"}
	
	
	==> kernel <==
	 18:54:21 up 27 min,  0 users,  load average: 0.47, 0.38, 0.26
	Linux ha-406291 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [468b13f5a8054a45b113ccc4b53701029f1d0b42ffdac760ce2de5642cce055d] <==
	I0621 18:47:19.889708       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:47:29.896242       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:47:29.896507       1 main.go:227] handling current node
	I0621 18:47:29.896581       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:47:29.896607       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:47:39.900437       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:47:39.900471       1 main.go:227] handling current node
	I0621 18:47:39.900481       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:47:39.900486       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:47:49.910179       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:47:49.910364       1 main.go:227] handling current node
	I0621 18:47:49.910412       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:47:49.910433       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:47:59.920904       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:47:59.921055       1 main.go:227] handling current node
	I0621 18:47:59.921083       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:47:59.921104       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:48:09.925491       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:48:09.925574       1 main.go:227] handling current node
	I0621 18:48:09.925596       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:48:09.925612       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:48:19.931901       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:48:19.931924       1 main.go:227] handling current node
	I0621 18:48:19.931934       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:48:19.931948       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [e41ffe84b8dea76129f1fa5d5726f6cf43e1409a959998ebe3a3fc56d8699d7f] <==
	I0621 18:53:11.677083       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:53:21.681340       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:53:21.681491       1 main.go:227] handling current node
	I0621 18:53:21.681517       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:53:21.681535       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:53:31.688278       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:53:31.688318       1 main.go:227] handling current node
	I0621 18:53:31.688332       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:53:31.688338       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:53:41.701842       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:53:41.701885       1 main.go:227] handling current node
	I0621 18:53:41.701909       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:53:41.701915       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:53:51.716954       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:53:51.717674       1 main.go:227] handling current node
	I0621 18:53:51.717721       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:53:51.717779       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:54:01.725293       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:54:01.725480       1 main.go:227] handling current node
	I0621 18:54:01.725509       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:54:01.725528       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	I0621 18:54:11.731578       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0621 18:54:11.731619       1 main.go:227] handling current node
	I0621 18:54:11.731630       1 main.go:223] Handling node with IPs: map[192.168.39.193:{}]
	I0621 18:54:11.731635       1 main.go:250] Node ha-406291-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [2d71c6ae5cee5f15a281850849c500184f8adb3ab533c12e4f88c9c4139ca6b3] <==
	I0621 18:48:29.003941       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0621 18:48:29.003974       1 establishing_controller.go:87] Shutting down EstablishingController
	I0621 18:48:29.004016       1 naming_controller.go:302] Shutting down NamingConditionController
	I0621 18:48:29.004054       1 controller.go:117] Shutting down OpenAPI V3 controller
	I0621 18:48:29.004093       1 controller.go:167] Shutting down OpenAPI controller
	I0621 18:48:29.004170       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0621 18:48:29.004222       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0621 18:48:29.004270       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0621 18:48:29.004356       1 controller.go:129] Ending legacy_token_tracking_controller
	I0621 18:48:29.004425       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0621 18:48:29.004499       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0621 18:48:29.004582       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0621 18:48:29.004661       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0621 18:48:29.005398       1 available_controller.go:439] Shutting down AvailableConditionController
	I0621 18:48:29.005443       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0621 18:48:29.009516       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0621 18:48:29.014355       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0621 18:48:29.017571       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0621 18:48:29.018587       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0621 18:48:29.018611       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0621 18:48:29.018651       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0621 18:48:29.018710       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0621 18:48:29.018731       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0621 18:48:29.022079       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	W0621 18:48:29.024248       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [6ce53eeeec0f21c6681925b7c5e72b8595ab65de8b0d0b768da43f7f434af72d] <==
	I0621 18:50:11.388689       1 controller.go:87] Starting OpenAPI V3 controller
	I0621 18:50:11.388786       1 naming_controller.go:291] Starting NamingConditionController
	I0621 18:50:11.388849       1 establishing_controller.go:76] Starting EstablishingController
	I0621 18:50:11.388914       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0621 18:50:11.388976       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0621 18:50:11.389024       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0621 18:50:11.459446       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0621 18:50:11.461317       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0621 18:50:11.461355       1 policy_source.go:224] refreshing policies
	I0621 18:50:11.462236       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0621 18:50:11.462495       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0621 18:50:11.462570       1 shared_informer.go:320] Caches are synced for configmaps
	I0621 18:50:11.462620       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0621 18:50:11.462560       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0621 18:50:11.463762       1 aggregator.go:165] initial CRD sync complete...
	I0621 18:50:11.463819       1 autoregister_controller.go:141] Starting autoregister controller
	I0621 18:50:11.463843       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0621 18:50:11.463901       1 cache.go:39] Caches are synced for autoregister controller
	I0621 18:50:11.464074       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0621 18:50:11.465293       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0621 18:50:11.469748       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0621 18:50:11.553642       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0621 18:50:12.365967       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0621 18:50:24.661126       1 controller.go:615] quota admission added evaluator for: endpoints
	I0621 18:50:24.756657       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3fbe446b39e8d30d0239ea55bcafc834021c44bf94d6c5a9d183fcce5cd16a31] <==
	I0621 18:27:39.330983       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.725µs"
	I0621 18:27:39.352409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.246µs"
	I0621 18:27:39.366116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.163µs"
	I0621 18:27:40.575618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="65.679µs"
	I0621 18:27:40.612176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.937752ms"
	I0621 18:27:40.612598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.232µs"
	I0621 18:27:40.634931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.444693ms"
	I0621 18:27:40.635035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.847µs"
	I0621 18:27:41.885215       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0621 18:28:57.137627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.563277ms"
	I0621 18:28:57.164070       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.375749ms"
	I0621 18:28:57.164194       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.743µs"
	I0621 18:29:00.876863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.452577ms"
	I0621 18:29:00.877083       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.932µs"
	I0621 18:41:01.468373       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-406291-m03\" does not exist"
	I0621 18:41:01.505245       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-406291-m03" podCIDRs=["10.244.1.0/24"]
	I0621 18:41:02.015312       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-406291-m03"
	I0621 18:41:10.879504       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-406291-m03"
	I0621 18:41:10.905675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="137.95µs"
	I0621 18:41:10.905996       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.91µs"
	I0621 18:41:10.921286       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.939µs"
	I0621 18:41:14.431187       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.902838ms"
	I0621 18:41:14.431268       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.911µs"
	I0621 18:47:02.153491       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.967868ms"
	I0621 18:47:02.153669       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.935µs"
	
	
	==> kube-controller-manager [e9c120a578b20e1b617a5b93202c07c27c30de5bfc4580b4c826235b3afc8204] <==
	I0621 18:50:24.553388       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0621 18:50:24.554288       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0621 18:50:24.556627       1 shared_informer.go:320] Caches are synced for stateful set
	I0621 18:50:24.558593       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0621 18:50:24.567415       1 shared_informer.go:320] Caches are synced for attach detach
	I0621 18:50:24.567453       1 shared_informer.go:320] Caches are synced for resource quota
	I0621 18:50:24.586989       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.232569ms"
	I0621 18:50:24.587087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="68.533µs"
	I0621 18:50:24.602738       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0621 18:50:24.603613       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0621 18:50:24.603724       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0621 18:50:24.603738       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0621 18:50:24.652886       1 shared_informer.go:320] Caches are synced for persistent volume
	I0621 18:50:24.653029       1 shared_informer.go:320] Caches are synced for PV protection
	I0621 18:50:25.040469       1 shared_informer.go:320] Caches are synced for garbage collector
	I0621 18:50:25.040558       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0621 18:50:25.050749       1 shared_informer.go:320] Caches are synced for garbage collector
	I0621 18:50:29.659533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="11.629839ms"
	I0621 18:50:29.659680       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.553µs"
	I0621 18:50:45.265661       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.751µs"
	I0621 18:54:11.005312       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.652113ms"
	I0621 18:54:11.005429       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.943µs"
	I0621 18:54:11.019923       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.991224ms"
	I0621 18:54:11.020008       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.822µs"
	I0621 18:54:11.020186       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.287µs"
	
	
	==> kube-proxy [246b5b36ac09f427c065ee257a5df705d3a4d6bb3c0bce5b8322f7d64496dc52] <==
	I0621 18:50:09.288398       1 server_linux.go:69] "Using iptables proxy"
	E0621 18:50:12.442279       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-406291\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0621 18:50:15.512951       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-406291\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0621 18:50:18.585517       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-406291\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0621 18:50:22.984302       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.198"]
	I0621 18:50:23.021021       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0621 18:50:23.021181       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0621 18:50:23.021227       1 server_linux.go:165] "Using iptables Proxier"
	I0621 18:50:23.023762       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0621 18:50:23.024088       1 server.go:872] "Version info" version="v1.30.2"
	I0621 18:50:23.024245       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 18:50:23.025824       1 config.go:192] "Starting service config controller"
	I0621 18:50:23.025902       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0621 18:50:23.025971       1 config.go:101] "Starting endpoint slice config controller"
	I0621 18:50:23.025989       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0621 18:50:23.026706       1 config.go:319] "Starting node config controller"
	I0621 18:50:23.026831       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0621 18:50:23.127003       1 shared_informer.go:320] Caches are synced for node config
	I0621 18:50:23.127050       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0621 18:50:23.127115       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [e41f8891c51779bf0c1b5871299816d7810f90994a6c83d827d63e437b61d547] <==
	I0621 18:27:38.126736       1 server_linux.go:69] "Using iptables proxy"
	I0621 18:27:38.143236       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.198"]
	I0621 18:27:38.177576       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0621 18:27:38.177626       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0621 18:27:38.177644       1 server_linux.go:165] "Using iptables Proxier"
	I0621 18:27:38.180797       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0621 18:27:38.181002       1 server.go:872] "Version info" version="v1.30.2"
	I0621 18:27:38.181026       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 18:27:38.182882       1 config.go:192] "Starting service config controller"
	I0621 18:27:38.183195       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0621 18:27:38.183262       1 config.go:101] "Starting endpoint slice config controller"
	I0621 18:27:38.183278       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0621 18:27:38.184787       1 config.go:319] "Starting node config controller"
	I0621 18:27:38.184819       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0621 18:27:38.283818       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0621 18:27:38.283839       1 shared_informer.go:320] Caches are synced for service config
	I0621 18:27:38.285303       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6f2e61853ab788fb7b5222dedf458d7085852d9caf32cf492e3bce968e130374] <==
	I0621 18:50:08.290679       1 serving.go:380] Generated self-signed cert in-memory
	W0621 18:50:11.414815       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0621 18:50:11.414966       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0621 18:50:11.415056       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0621 18:50:11.415082       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0621 18:50:11.447211       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0621 18:50:11.448436       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 18:50:11.456933       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0621 18:50:11.457032       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0621 18:50:11.457077       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0621 18:50:11.460859       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0621 18:50:11.557723       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [a143e6000662ad186e45d6f035abc485373adbc71e6aa228c57cf9ec40199d3d] <==
	E0621 18:27:21.176992       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:21.177025       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177056       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0621 18:27:21.177088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0621 18:27:21.177120       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0621 18:27:21.177197       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:21.177204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:21.177266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0621 18:27:22.041765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.041824       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.144830       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 18:27:22.144881       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0621 18:27:22.217224       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 18:27:22.217266       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0621 18:27:22.256407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0621 18:27:22.256450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0621 18:27:22.361486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0621 18:27:22.361536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0621 18:27:22.366073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0621 18:27:22.366190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0621 18:27:25.267361       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0621 18:48:28.987861       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0621 18:48:28.987988       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0621 18:48:28.988601       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 21 18:50:22 ha-406291 kubelet[1367]: I0621 18:50:22.421287    1367 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-406291"
	Jun 21 18:50:22 ha-406291 kubelet[1367]: I0621 18:50:22.432917    1367 scope.go:117] "RemoveContainer" containerID="adf7b4a3e9492eae203fe2ae963d6b1b131c8c6c809259fcf8ee94872bdf0bea"
	Jun 21 18:50:22 ha-406291 kubelet[1367]: I0621 18:50:22.434123    1367 scope.go:117] "RemoveContainer" containerID="6bba601718e9734309428daa119e2e5d6e129b3436277dc5011fa708f21b8de0"
	Jun 21 18:50:24 ha-406291 kubelet[1367]: E0621 18:50:24.491904    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:50:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:50:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:50:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:50:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:51:24 ha-406291 kubelet[1367]: E0621 18:51:24.484207    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:51:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:51:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:51:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:51:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:51:26 ha-406291 kubelet[1367]: I0621 18:51:26.432644    1367 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-406291" podUID="48932727-9ffb-476e-8b2a-ee40959393c5"
	Jun 21 18:51:49 ha-406291 kubelet[1367]: I0621 18:51:49.719495    1367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-qvl48" podStartSLOduration=1370.151479628 podStartE2EDuration="22m52.719456002s" podCreationTimestamp="2024-06-21 18:28:57 +0000 UTC" firstStartedPulling="2024-06-21 18:28:57.551504492 +0000 UTC m=+93.252502721" lastFinishedPulling="2024-06-21 18:29:00.119480863 +0000 UTC m=+95.820479095" observedRunningTime="2024-06-21 18:29:00.862800003 +0000 UTC m=+96.563798241" watchObservedRunningTime="2024-06-21 18:51:49.719456002 +0000 UTC m=+1465.420454249"
	Jun 21 18:52:24 ha-406291 kubelet[1367]: E0621 18:52:24.483755    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:52:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:52:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:52:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:52:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 18:53:24 ha-406291 kubelet[1367]: E0621 18:53:24.483552    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 18:53:24 ha-406291 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 18:53:24 ha-406291 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 18:53:24 ha-406291 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 18:53:24 ha-406291 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0621 18:54:20.729163   39336 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19112-8111/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-406291 -n ha-406291
helpers_test.go:261: (dbg) Run:  kubectl --context ha-406291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-mt8z9 busybox-fc5497c4f-p2c87
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-406291 describe pod busybox-fc5497c4f-mt8z9 busybox-fc5497c4f-p2c87
helpers_test.go:282: (dbg) kubectl --context ha-406291 describe pod busybox-fc5497c4f-mt8z9 busybox-fc5497c4f-p2c87:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-mt8z9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cr6l7 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-cr6l7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  11s   default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	
	
	Name:             busybox-fc5497c4f-p2c87
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q8tzk (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-q8tzk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  4m11s                default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  4m                   default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  14m (x3 over 25m)    default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  7m58s (x3 over 13m)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (143.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 stop -v=7 --alsologtostderr
E0621 18:55:54.861969   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-406291 stop -v=7 --alsologtostderr: exit status 82 (2m1.68103145s)

                                                
                                                
-- stdout --
	* Stopping node "ha-406291-m02"  ...
	* Stopping node "ha-406291"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0621 18:54:22.616213   39426 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:54:22.616416   39426 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:54:22.616424   39426 out.go:304] Setting ErrFile to fd 2...
	I0621 18:54:22.616428   39426 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:54:22.616595   39426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:54:22.616797   39426 out.go:298] Setting JSON to false
	I0621 18:54:22.616860   39426 mustload.go:65] Loading cluster: ha-406291
	I0621 18:54:22.617191   39426 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:54:22.617305   39426 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/ha-406291/config.json ...
	I0621 18:54:22.617478   39426 mustload.go:65] Loading cluster: ha-406291
	I0621 18:54:22.617604   39426 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:54:22.617634   39426 stop.go:39] StopHost: ha-406291-m02
	I0621 18:54:22.618013   39426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:54:22.618060   39426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:54:22.632734   39426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37715
	I0621 18:54:22.633093   39426 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:54:22.633583   39426 main.go:141] libmachine: Using API Version  1
	I0621 18:54:22.633608   39426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:54:22.633974   39426 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:54:22.636444   39426 out.go:177] * Stopping node "ha-406291-m02"  ...
	I0621 18:54:22.637714   39426 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0621 18:54:22.637775   39426 main.go:141] libmachine: (ha-406291-m02) Calling .DriverName
	I0621 18:54:22.638008   39426 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0621 18:54:22.638042   39426 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHHostname
	I0621 18:54:22.640848   39426 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:54:22.641312   39426 main.go:141] libmachine: (ha-406291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:9a:09", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:41:35 +0000 UTC Type:0 Mac:52:54:00:a6:9a:09 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-406291-m02 Clientid:01:52:54:00:a6:9a:09}
	I0621 18:54:22.641349   39426 main.go:141] libmachine: (ha-406291-m02) DBG | domain ha-406291-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:a6:9a:09 in network mk-ha-406291
	I0621 18:54:22.641467   39426 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHPort
	I0621 18:54:22.641654   39426 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHKeyPath
	I0621 18:54:22.641869   39426 main.go:141] libmachine: (ha-406291-m02) Calling .GetSSHUsername
	I0621 18:54:22.641990   39426 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291-m02/id_rsa Username:docker}
	I0621 18:54:22.724429   39426 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0621 18:54:22.777542   39426 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0621 18:54:22.830662   39426 main.go:141] libmachine: Stopping "ha-406291-m02"...
	I0621 18:54:22.830688   39426 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:54:22.832094   39426 main.go:141] libmachine: (ha-406291-m02) Calling .Stop
	I0621 18:54:22.835227   39426 main.go:141] libmachine: (ha-406291-m02) Waiting for machine to stop 0/120
	I0621 18:54:23.836924   39426 main.go:141] libmachine: (ha-406291-m02) Calling .GetState
	I0621 18:54:23.838145   39426 main.go:141] libmachine: Machine "ha-406291-m02" was stopped.
	I0621 18:54:23.838162   39426 stop.go:75] duration metric: took 1.200449096s to stop
	I0621 18:54:23.838200   39426 stop.go:39] StopHost: ha-406291
	I0621 18:54:23.838495   39426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:54:23.838540   39426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:54:23.853313   39426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44307
	I0621 18:54:23.853763   39426 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:54:23.854276   39426 main.go:141] libmachine: Using API Version  1
	I0621 18:54:23.854299   39426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:54:23.854607   39426 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:54:23.856745   39426 out.go:177] * Stopping node "ha-406291"  ...
	I0621 18:54:23.858010   39426 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0621 18:54:23.858032   39426 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:54:23.858244   39426 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0621 18:54:23.858275   39426 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:54:23.861277   39426 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:54:23.861867   39426 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:54:23.861899   39426 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:54:23.862033   39426 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:54:23.862222   39426 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:54:23.862372   39426 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:54:23.862501   39426 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}
	I0621 18:54:23.940798   39426 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0621 18:54:23.994295   39426 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0621 18:54:24.048989   39426 main.go:141] libmachine: Stopping "ha-406291"...
	I0621 18:54:24.049013   39426 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:54:24.050713   39426 main.go:141] libmachine: (ha-406291) Calling .Stop
	I0621 18:54:24.053825   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 0/120
	I0621 18:54:25.055372   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 1/120
	I0621 18:54:26.056867   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 2/120
	I0621 18:54:27.058321   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 3/120
	I0621 18:54:28.059572   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 4/120
	I0621 18:54:29.061327   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 5/120
	I0621 18:54:30.063133   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 6/120
	I0621 18:54:31.064646   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 7/120
	I0621 18:54:32.066108   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 8/120
	I0621 18:54:33.067456   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 9/120
	I0621 18:54:34.069482   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 10/120
	I0621 18:54:35.071013   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 11/120
	I0621 18:54:36.072371   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 12/120
	I0621 18:54:37.074055   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 13/120
	I0621 18:54:38.075386   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 14/120
	I0621 18:54:39.077114   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 15/120
	I0621 18:54:40.078760   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 16/120
	I0621 18:54:41.080096   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 17/120
	I0621 18:54:42.081634   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 18/120
	I0621 18:54:43.082977   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 19/120
	I0621 18:54:44.084698   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 20/120
	I0621 18:54:45.086188   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 21/120
	I0621 18:54:46.088325   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 22/120
	I0621 18:54:47.090028   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 23/120
	I0621 18:54:48.091518   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 24/120
	I0621 18:54:49.093267   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 25/120
	I0621 18:54:50.094760   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 26/120
	I0621 18:54:51.096329   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 27/120
	I0621 18:54:52.097778   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 28/120
	I0621 18:54:53.099221   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 29/120
	I0621 18:54:54.101152   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 30/120
	I0621 18:54:55.102560   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 31/120
	I0621 18:54:56.103945   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 32/120
	I0621 18:54:57.105282   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 33/120
	I0621 18:54:58.106718   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 34/120
	I0621 18:54:59.108664   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 35/120
	I0621 18:55:00.110119   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 36/120
	I0621 18:55:01.111656   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 37/120
	I0621 18:55:02.113574   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 38/120
	I0621 18:55:03.115416   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 39/120
	I0621 18:55:04.117478   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 40/120
	I0621 18:55:05.119170   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 41/120
	I0621 18:55:06.120906   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 42/120
	I0621 18:55:07.122656   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 43/120
	I0621 18:55:08.124312   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 44/120
	I0621 18:55:09.126104   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 45/120
	I0621 18:55:10.127660   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 46/120
	I0621 18:55:11.129352   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 47/120
	I0621 18:55:12.130753   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 48/120
	I0621 18:55:13.132310   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 49/120
	I0621 18:55:14.134250   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 50/120
	I0621 18:55:15.135844   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 51/120
	I0621 18:55:16.137484   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 52/120
	I0621 18:55:17.139120   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 53/120
	I0621 18:55:18.140587   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 54/120
	I0621 18:55:19.142923   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 55/120
	I0621 18:55:20.144542   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 56/120
	I0621 18:55:21.145992   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 57/120
	I0621 18:55:22.147444   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 58/120
	I0621 18:55:23.149127   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 59/120
	I0621 18:55:24.151082   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 60/120
	I0621 18:55:25.152437   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 61/120
	I0621 18:55:26.154011   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 62/120
	I0621 18:55:27.155676   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 63/120
	I0621 18:55:28.157419   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 64/120
	I0621 18:55:29.159492   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 65/120
	I0621 18:55:30.161135   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 66/120
	I0621 18:55:31.162571   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 67/120
	I0621 18:55:32.164276   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 68/120
	I0621 18:55:33.165733   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 69/120
	I0621 18:55:34.167727   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 70/120
	I0621 18:55:35.169031   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 71/120
	I0621 18:55:36.170519   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 72/120
	I0621 18:55:37.172172   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 73/120
	I0621 18:55:38.173871   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 74/120
	I0621 18:55:39.175701   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 75/120
	I0621 18:55:40.177267   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 76/120
	I0621 18:55:41.178723   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 77/120
	I0621 18:55:42.180235   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 78/120
	I0621 18:55:43.181715   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 79/120
	I0621 18:55:44.183717   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 80/120
	I0621 18:55:45.185255   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 81/120
	I0621 18:55:46.186668   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 82/120
	I0621 18:55:47.188452   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 83/120
	I0621 18:55:48.190136   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 84/120
	I0621 18:55:49.192198   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 85/120
	I0621 18:55:50.193819   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 86/120
	I0621 18:55:51.195272   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 87/120
	I0621 18:55:52.196656   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 88/120
	I0621 18:55:53.198016   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 89/120
	I0621 18:55:54.199992   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 90/120
	I0621 18:55:55.201428   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 91/120
	I0621 18:55:56.203379   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 92/120
	I0621 18:55:57.204685   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 93/120
	I0621 18:55:58.206261   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 94/120
	I0621 18:55:59.208169   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 95/120
	I0621 18:56:00.209645   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 96/120
	I0621 18:56:01.211161   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 97/120
	I0621 18:56:02.212642   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 98/120
	I0621 18:56:03.214482   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 99/120
	I0621 18:56:04.216691   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 100/120
	I0621 18:56:05.218324   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 101/120
	I0621 18:56:06.220031   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 102/120
	I0621 18:56:07.221387   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 103/120
	I0621 18:56:08.222930   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 104/120
	I0621 18:56:09.224840   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 105/120
	I0621 18:56:10.226193   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 106/120
	I0621 18:56:11.227893   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 107/120
	I0621 18:56:12.229632   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 108/120
	I0621 18:56:13.231183   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 109/120
	I0621 18:56:14.233424   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 110/120
	I0621 18:56:15.234836   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 111/120
	I0621 18:56:16.236414   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 112/120
	I0621 18:56:17.237898   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 113/120
	I0621 18:56:18.239279   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 114/120
	I0621 18:56:19.241297   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 115/120
	I0621 18:56:20.242828   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 116/120
	I0621 18:56:21.244212   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 117/120
	I0621 18:56:22.246157   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 118/120
	I0621 18:56:23.247685   39426 main.go:141] libmachine: (ha-406291) Waiting for machine to stop 119/120
	I0621 18:56:24.248896   39426 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0621 18:56:24.248963   39426 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0621 18:56:24.251384   39426 out.go:177] 
	W0621 18:56:24.253092   39426 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0621 18:56:24.253104   39426 out.go:239] * 
	* 
	W0621 18:56:24.255022   39426 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0621 18:56:24.256323   39426 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-406291 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr: signal: killed (18.15596024s)

                                                
                                                
** stderr ** 
	I0621 18:56:24.301521   39900 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:56:24.301647   39900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:56:24.301657   39900 out.go:304] Setting ErrFile to fd 2...
	I0621 18:56:24.301663   39900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:56:24.301904   39900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:56:24.302085   39900 out.go:298] Setting JSON to false
	I0621 18:56:24.302107   39900 mustload.go:65] Loading cluster: ha-406291
	I0621 18:56:24.302158   39900 notify.go:220] Checking for updates...
	I0621 18:56:24.302648   39900 config.go:182] Loaded profile config "ha-406291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:56:24.302670   39900 status.go:255] checking status of ha-406291 ...
	I0621 18:56:24.303090   39900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:56:24.303149   39900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:56:24.323076   39900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32895
	I0621 18:56:24.323458   39900 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:56:24.324116   39900 main.go:141] libmachine: Using API Version  1
	I0621 18:56:24.324145   39900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:56:24.324502   39900 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:56:24.324730   39900 main.go:141] libmachine: (ha-406291) Calling .GetState
	I0621 18:56:24.326228   39900 status.go:330] ha-406291 host status = "Running" (err=<nil>)
	I0621 18:56:24.326248   39900 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:56:24.326539   39900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:56:24.326572   39900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:56:24.340907   39900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38345
	I0621 18:56:24.341338   39900 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:56:24.341758   39900 main.go:141] libmachine: Using API Version  1
	I0621 18:56:24.341777   39900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:56:24.342122   39900 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:56:24.342316   39900 main.go:141] libmachine: (ha-406291) Calling .GetIP
	I0621 18:56:24.344974   39900 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:56:24.345531   39900 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:56:24.345570   39900 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:56:24.345729   39900 host.go:66] Checking if "ha-406291" exists ...
	I0621 18:56:24.346049   39900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:56:24.346091   39900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:56:24.360709   39900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46473
	I0621 18:56:24.361153   39900 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:56:24.361606   39900 main.go:141] libmachine: Using API Version  1
	I0621 18:56:24.361630   39900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:56:24.361990   39900 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:56:24.362168   39900 main.go:141] libmachine: (ha-406291) Calling .DriverName
	I0621 18:56:24.362387   39900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 18:56:24.362411   39900 main.go:141] libmachine: (ha-406291) Calling .GetSSHHostname
	I0621 18:56:24.365346   39900 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:56:24.365764   39900 main.go:141] libmachine: (ha-406291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:dc:46", ip: ""} in network mk-ha-406291: {Iface:virbr1 ExpiryTime:2024-06-21 19:26:56 +0000 UTC Type:0 Mac:52:54:00:38:dc:46 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-406291 Clientid:01:52:54:00:38:dc:46}
	I0621 18:56:24.365792   39900 main.go:141] libmachine: (ha-406291) DBG | domain ha-406291 has defined IP address 192.168.39.198 and MAC address 52:54:00:38:dc:46 in network mk-ha-406291
	I0621 18:56:24.365954   39900 main.go:141] libmachine: (ha-406291) Calling .GetSSHPort
	I0621 18:56:24.366105   39900 main.go:141] libmachine: (ha-406291) Calling .GetSSHKeyPath
	I0621 18:56:24.366255   39900 main.go:141] libmachine: (ha-406291) Calling .GetSSHUsername
	I0621 18:56:24.366367   39900 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/ha-406291/id_rsa Username:docker}

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-406291 status -v=7 --alsologtostderr" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-406291 -n ha-406291
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-406291 -n ha-406291: exit status 3 (3.783560925s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0621 18:56:46.198167   39994 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host
	E0621 18:56:46.198188   39994 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-406291" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopCluster (143.62s)
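
The 0/120 through 119/120 lines in the log above come from a fixed polling loop: the stop path checks the guest state roughly once per second and gives up after 120 attempts, which surfaces as the GUEST_STOP_TIMEOUT / exit status 82 failure this test reports. A minimal sketch of that pattern, assuming a hypothetical getState callback in place of the real libmachine driver call (this is not minikube's actual implementation):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls getState once per second, mirroring the
// "Waiting for machine to stop i/120" lines above, and returns an error
// if the guest is still running after all attempts are used up.
func waitForStop(getState func() (string, error), attempts int) error {
	for i := 0; i < attempts; i++ {
		if state, err := getState(); err == nil && state != "Running" {
			return nil // guest left the Running state; treat it as stopped
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Simulate a guest that ignores the shutdown request, as in the failed run
	// (3 attempts here instead of 120, just to keep the demo quick).
	err := waitForStop(func() (string, error) { return "Running", nil }, 3)
	fmt.Println("stop err:", err)
}

In the failing run the guest never leaves the "Running" state, so every attempt is exhausted and the CLI exits with status 82.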

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (300.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-851952
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-851952
E0621 19:05:54.862252   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-851952: exit status 82 (2m1.899044937s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-851952-m03"  ...
	* Stopping node "multinode-851952-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-851952" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-851952 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-851952 --wait=true -v=8 --alsologtostderr: (2m56.294749071s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-851952
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-851952 -n multinode-851952
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-851952 logs -n 25: (1.403502196s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-851952 ssh -n                                                                 | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-851952 cp multinode-851952-m02:/home/docker/cp-test.txt                       | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile293116882/001/cp-test_multinode-851952-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-851952 ssh -n                                                                 | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-851952 cp multinode-851952-m02:/home/docker/cp-test.txt                       | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952:/home/docker/cp-test_multinode-851952-m02_multinode-851952.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-851952 ssh -n                                                                 | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-851952 ssh -n multinode-851952 sudo cat                                       | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | /home/docker/cp-test_multinode-851952-m02_multinode-851952.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-851952 cp multinode-851952-m02:/home/docker/cp-test.txt                       | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952-m03:/home/docker/cp-test_multinode-851952-m02_multinode-851952-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-851952 ssh -n                                                                 | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-851952 ssh -n multinode-851952-m03 sudo cat                                   | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | /home/docker/cp-test_multinode-851952-m02_multinode-851952-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-851952 cp testdata/cp-test.txt                                                | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-851952 ssh -n                                                                 | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-851952 cp multinode-851952-m03:/home/docker/cp-test.txt                       | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile293116882/001/cp-test_multinode-851952-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-851952 ssh -n                                                                 | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-851952 cp multinode-851952-m03:/home/docker/cp-test.txt                       | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952:/home/docker/cp-test_multinode-851952-m03_multinode-851952.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-851952 ssh -n                                                                 | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-851952 ssh -n multinode-851952 sudo cat                                       | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | /home/docker/cp-test_multinode-851952-m03_multinode-851952.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-851952 cp multinode-851952-m03:/home/docker/cp-test.txt                       | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952-m02:/home/docker/cp-test_multinode-851952-m03_multinode-851952-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-851952 ssh -n                                                                 | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-851952 ssh -n multinode-851952-m02 sudo cat                                   | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | /home/docker/cp-test_multinode-851952-m03_multinode-851952-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-851952 node stop m03                                                          | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	| node    | multinode-851952 node start                                                             | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:04 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-851952                                                                | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:04 UTC |                     |
	| stop    | -p multinode-851952                                                                     | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:04 UTC |                     |
	| start   | -p multinode-851952                                                                     | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:06 UTC | 21 Jun 24 19:09 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-851952                                                                | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:09 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/21 19:06:20
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0621 19:06:20.045149   46765 out.go:291] Setting OutFile to fd 1 ...
	I0621 19:06:20.045553   46765 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 19:06:20.045564   46765 out.go:304] Setting ErrFile to fd 2...
	I0621 19:06:20.045569   46765 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 19:06:20.045786   46765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 19:06:20.046359   46765 out.go:298] Setting JSON to false
	I0621 19:06:20.047239   46765 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6478,"bootTime":1718990302,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 19:06:20.047298   46765 start.go:139] virtualization: kvm guest
	I0621 19:06:20.049572   46765 out.go:177] * [multinode-851952] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 19:06:20.051045   46765 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 19:06:20.051052   46765 notify.go:220] Checking for updates...
	I0621 19:06:20.052311   46765 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 19:06:20.053564   46765 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 19:06:20.055045   46765 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 19:06:20.056361   46765 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 19:06:20.057586   46765 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 19:06:20.059244   46765 config.go:182] Loaded profile config "multinode-851952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 19:06:20.059351   46765 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 19:06:20.059761   46765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 19:06:20.059831   46765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 19:06:20.074865   46765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45043
	I0621 19:06:20.075341   46765 main.go:141] libmachine: () Calling .GetVersion
	I0621 19:06:20.075902   46765 main.go:141] libmachine: Using API Version  1
	I0621 19:06:20.075926   46765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 19:06:20.076245   46765 main.go:141] libmachine: () Calling .GetMachineName
	I0621 19:06:20.076441   46765 main.go:141] libmachine: (multinode-851952) Calling .DriverName
	I0621 19:06:20.110771   46765 out.go:177] * Using the kvm2 driver based on existing profile
	I0621 19:06:20.112015   46765 start.go:297] selected driver: kvm2
	I0621 19:06:20.112036   46765 start.go:901] validating driver "kvm2" against &{Name:multinode-851952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-851952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.135 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 19:06:20.112175   46765 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 19:06:20.112485   46765 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 19:06:20.112548   46765 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 19:06:20.127074   46765 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 19:06:20.127820   46765 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0621 19:06:20.127879   46765 cni.go:84] Creating CNI manager for ""
	I0621 19:06:20.127890   46765 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0621 19:06:20.127978   46765 start.go:340] cluster config:
	{Name:multinode-851952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-851952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.135 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 19:06:20.128112   46765 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 19:06:20.129931   46765 out.go:177] * Starting "multinode-851952" primary control-plane node in "multinode-851952" cluster
	I0621 19:06:20.131079   46765 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 19:06:20.131112   46765 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 19:06:20.131121   46765 cache.go:56] Caching tarball of preloaded images
	I0621 19:06:20.131201   46765 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 19:06:20.131211   46765 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 19:06:20.131332   46765 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952/config.json ...
	I0621 19:06:20.131519   46765 start.go:360] acquireMachinesLock for multinode-851952: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 19:06:20.131558   46765 start.go:364] duration metric: took 21.852µs to acquireMachinesLock for "multinode-851952"
	I0621 19:06:20.131572   46765 start.go:96] Skipping create...Using existing machine configuration
	I0621 19:06:20.131580   46765 fix.go:54] fixHost starting: 
	I0621 19:06:20.131826   46765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 19:06:20.131855   46765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 19:06:20.146031   46765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43453
	I0621 19:06:20.146410   46765 main.go:141] libmachine: () Calling .GetVersion
	I0621 19:06:20.146866   46765 main.go:141] libmachine: Using API Version  1
	I0621 19:06:20.146888   46765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 19:06:20.147233   46765 main.go:141] libmachine: () Calling .GetMachineName
	I0621 19:06:20.147456   46765 main.go:141] libmachine: (multinode-851952) Calling .DriverName
	I0621 19:06:20.147588   46765 main.go:141] libmachine: (multinode-851952) Calling .GetState
	I0621 19:06:20.149044   46765 fix.go:112] recreateIfNeeded on multinode-851952: state=Running err=<nil>
	W0621 19:06:20.149059   46765 fix.go:138] unexpected machine state, will restart: <nil>
	I0621 19:06:20.151173   46765 out.go:177] * Updating the running kvm2 "multinode-851952" VM ...
	I0621 19:06:20.152449   46765 machine.go:94] provisionDockerMachine start ...
	I0621 19:06:20.152470   46765 main.go:141] libmachine: (multinode-851952) Calling .DriverName
	I0621 19:06:20.152656   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHHostname
	I0621 19:06:20.155643   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.156192   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:06:20.156224   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.156367   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHPort
	I0621 19:06:20.156546   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:06:20.156678   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:06:20.156822   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHUsername
	I0621 19:06:20.157043   46765 main.go:141] libmachine: Using SSH client type: native
	I0621 19:06:20.157289   46765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0621 19:06:20.157313   46765 main.go:141] libmachine: About to run SSH command:
	hostname
	I0621 19:06:20.266796   46765 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-851952
	
	I0621 19:06:20.266827   46765 main.go:141] libmachine: (multinode-851952) Calling .GetMachineName
	I0621 19:06:20.267069   46765 buildroot.go:166] provisioning hostname "multinode-851952"
	I0621 19:06:20.267089   46765 main.go:141] libmachine: (multinode-851952) Calling .GetMachineName
	I0621 19:06:20.267311   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHHostname
	I0621 19:06:20.269998   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.270402   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:06:20.270427   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.270547   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHPort
	I0621 19:06:20.270723   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:06:20.270853   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:06:20.270994   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHUsername
	I0621 19:06:20.271168   46765 main.go:141] libmachine: Using SSH client type: native
	I0621 19:06:20.271400   46765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0621 19:06:20.271419   46765 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-851952 && echo "multinode-851952" | sudo tee /etc/hostname
	I0621 19:06:20.389464   46765 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-851952
	
	I0621 19:06:20.389498   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHHostname
	I0621 19:06:20.392333   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.392718   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:06:20.392750   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.392989   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHPort
	I0621 19:06:20.393156   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:06:20.393302   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:06:20.393412   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHUsername
	I0621 19:06:20.393565   46765 main.go:141] libmachine: Using SSH client type: native
	I0621 19:06:20.393740   46765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0621 19:06:20.393755   46765 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-851952' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-851952/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-851952' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 19:06:20.498431   46765 main.go:141] libmachine: SSH cmd err, output: <nil>: 
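
Not part of the original log: the provisioning steps above set the hostname with `sudo hostname ...` and patch /etc/hosts over SSH. Below is a minimal, illustrative Go sketch of running such a command with golang.org/x/crypto/ssh. The IP, key path, user and command string simply echo values seen in the log and are placeholders; this is not minikube's actual ssh_runner/sshutil implementation.

// Illustrative sketch only: run one provisioning command over SSH.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder key path, echoing the sshutil line in the log.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19112-8111/.minikube/machines/multinode-851952/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
	}
	client, err := ssh.Dial("tcp", "192.168.39.146:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Same shape as the hostname command shown in the log above.
	out, err := session.CombinedOutput(`sudo hostname multinode-851952 && echo "multinode-851952" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("SSH cmd output: %s\n", out)
}
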
	I0621 19:06:20.498457   46765 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 19:06:20.498471   46765 buildroot.go:174] setting up certificates
	I0621 19:06:20.498480   46765 provision.go:84] configureAuth start
	I0621 19:06:20.498488   46765 main.go:141] libmachine: (multinode-851952) Calling .GetMachineName
	I0621 19:06:20.498764   46765 main.go:141] libmachine: (multinode-851952) Calling .GetIP
	I0621 19:06:20.501235   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.501562   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:06:20.501585   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.501708   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHHostname
	I0621 19:06:20.503796   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.504177   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:06:20.504216   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.504273   46765 provision.go:143] copyHostCerts
	I0621 19:06:20.504306   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 19:06:20.504348   46765 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 19:06:20.504356   46765 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 19:06:20.504418   46765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 19:06:20.504514   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 19:06:20.504532   46765 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 19:06:20.504539   46765 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 19:06:20.504564   46765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 19:06:20.504619   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 19:06:20.504635   46765 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 19:06:20.504641   46765 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 19:06:20.504661   46765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 19:06:20.504717   46765 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.multinode-851952 san=[127.0.0.1 192.168.39.146 localhost minikube multinode-851952]
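
The line above generates a server certificate whose SANs are [127.0.0.1 192.168.39.146 localhost minikube multinode-851952]. As a rough illustration only (not minikube's provision code), a certificate with the same SAN set can be produced with Go's crypto/x509; it is self-signed here for brevity, whereas minikube signs with its CA key, and the 26280h lifetime mirrors the CertExpiration value from the profile dump.

// Illustrative sketch only: issue a server cert with the SANs from the log line.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-851952"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-851952"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.146")},
	}
	// Self-signed (template == parent) for brevity only.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
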
	I0621 19:06:20.797647   46765 provision.go:177] copyRemoteCerts
	I0621 19:06:20.797710   46765 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 19:06:20.797732   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHHostname
	I0621 19:06:20.800244   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.800594   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:06:20.800631   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.800698   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHPort
	I0621 19:06:20.800885   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:06:20.801065   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHUsername
	I0621 19:06:20.801215   46765 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/multinode-851952/id_rsa Username:docker}
	I0621 19:06:20.888463   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 19:06:20.888528   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 19:06:20.914567   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 19:06:20.914657   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0621 19:06:20.939338   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 19:06:20.939421   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 19:06:20.963344   46765 provision.go:87] duration metric: took 464.853396ms to configureAuth
	I0621 19:06:20.963375   46765 buildroot.go:189] setting minikube options for container-runtime
	I0621 19:06:20.963600   46765 config.go:182] Loaded profile config "multinode-851952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 19:06:20.963672   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHHostname
	I0621 19:06:20.966795   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.967173   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:06:20.967215   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.967347   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHPort
	I0621 19:06:20.967555   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:06:20.967681   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:06:20.967803   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHUsername
	I0621 19:06:20.967940   46765 main.go:141] libmachine: Using SSH client type: native
	I0621 19:06:20.968147   46765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0621 19:06:20.968163   46765 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 19:07:51.817679   46765 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 19:07:51.817712   46765 machine.go:97] duration metric: took 1m31.665249447s to provisionDockerMachine
	I0621 19:07:51.817724   46765 start.go:293] postStartSetup for "multinode-851952" (driver="kvm2")
	I0621 19:07:51.817733   46765 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 19:07:51.817767   46765 main.go:141] libmachine: (multinode-851952) Calling .DriverName
	I0621 19:07:51.818121   46765 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 19:07:51.818149   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHHostname
	I0621 19:07:51.821081   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:07:51.821664   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:07:51.821700   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:07:51.821909   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHPort
	I0621 19:07:51.822103   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:07:51.822293   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHUsername
	I0621 19:07:51.822541   46765 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/multinode-851952/id_rsa Username:docker}
	I0621 19:07:51.905158   46765 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 19:07:51.909014   46765 command_runner.go:130] > NAME=Buildroot
	I0621 19:07:51.909033   46765 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0621 19:07:51.909039   46765 command_runner.go:130] > ID=buildroot
	I0621 19:07:51.909047   46765 command_runner.go:130] > VERSION_ID=2023.02.9
	I0621 19:07:51.909054   46765 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0621 19:07:51.909159   46765 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 19:07:51.909184   46765 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 19:07:51.909282   46765 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 19:07:51.909360   46765 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 19:07:51.909370   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 19:07:51.909458   46765 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 19:07:51.918667   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 19:07:51.942307   46765 start.go:296] duration metric: took 124.571035ms for postStartSetup
	I0621 19:07:51.942345   46765 fix.go:56] duration metric: took 1m31.810765351s for fixHost
	I0621 19:07:51.942363   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHHostname
	I0621 19:07:51.945262   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:07:51.945678   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:07:51.945707   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:07:51.945863   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHPort
	I0621 19:07:51.946055   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:07:51.946261   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:07:51.946398   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHUsername
	I0621 19:07:51.946570   46765 main.go:141] libmachine: Using SSH client type: native
	I0621 19:07:51.946773   46765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0621 19:07:51.946789   46765 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0621 19:07:52.050586   46765 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718996872.032188392
	
	I0621 19:07:52.050609   46765 fix.go:216] guest clock: 1718996872.032188392
	I0621 19:07:52.050616   46765 fix.go:229] Guest: 2024-06-21 19:07:52.032188392 +0000 UTC Remote: 2024-06-21 19:07:51.942348587 +0000 UTC m=+91.931891791 (delta=89.839805ms)
	I0621 19:07:52.050635   46765 fix.go:200] guest clock delta is within tolerance: 89.839805ms
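
For context, the guest clock check above reads `date +%s.%N` on the VM and compares it with the host time, accepting a small delta. A small illustrative sketch of that comparison (not minikube code; the 2s tolerance is an assumption, and the sample value is the one printed in the log):

// Illustrative sketch only: compare a guest "seconds.nanoseconds" timestamp to the host clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestRaw := "1718996872.032188392" // value reported by the guest in the log
	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)  // error handling elided for brevity
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock: %s, delta: %s, within tolerance: %v\n",
		guest.UTC(), delta, delta < 2*time.Second) // 2s tolerance is an assumption
}
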
	I0621 19:07:52.050640   46765 start.go:83] releasing machines lock for "multinode-851952", held for 1m31.919073483s
	I0621 19:07:52.050656   46765 main.go:141] libmachine: (multinode-851952) Calling .DriverName
	I0621 19:07:52.050940   46765 main.go:141] libmachine: (multinode-851952) Calling .GetIP
	I0621 19:07:52.053927   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:07:52.054324   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:07:52.054355   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:07:52.054482   46765 main.go:141] libmachine: (multinode-851952) Calling .DriverName
	I0621 19:07:52.055012   46765 main.go:141] libmachine: (multinode-851952) Calling .DriverName
	I0621 19:07:52.055198   46765 main.go:141] libmachine: (multinode-851952) Calling .DriverName
	I0621 19:07:52.055293   46765 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 19:07:52.055371   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHHostname
	I0621 19:07:52.055403   46765 ssh_runner.go:195] Run: cat /version.json
	I0621 19:07:52.055425   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHHostname
	I0621 19:07:52.058334   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:07:52.058647   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:07:52.058669   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:07:52.058681   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:07:52.058835   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHPort
	I0621 19:07:52.059022   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:07:52.059173   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:07:52.059196   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:07:52.059209   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHUsername
	I0621 19:07:52.059353   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHPort
	I0621 19:07:52.059355   46765 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/multinode-851952/id_rsa Username:docker}
	I0621 19:07:52.059602   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:07:52.059751   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHUsername
	I0621 19:07:52.059899   46765 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/multinode-851952/id_rsa Username:docker}
	I0621 19:07:52.167667   46765 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0621 19:07:52.168333   46765 command_runner.go:130] > {"iso_version": "v1.33.1-1718923868-19112", "kicbase_version": "v0.0.44-1718753665-19106", "minikube_version": "v1.33.1", "commit": "638985b67054e850774ca4205134dbef5391c341"}
	I0621 19:07:52.168487   46765 ssh_runner.go:195] Run: systemctl --version
	I0621 19:07:52.174385   46765 command_runner.go:130] > systemd 252 (252)
	I0621 19:07:52.174414   46765 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0621 19:07:52.174465   46765 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 19:07:52.329284   46765 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0621 19:07:52.337836   46765 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0621 19:07:52.337973   46765 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 19:07:52.338068   46765 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 19:07:52.347385   46765 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0621 19:07:52.347411   46765 start.go:494] detecting cgroup driver to use...
	I0621 19:07:52.347476   46765 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 19:07:52.363506   46765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 19:07:52.377050   46765 docker.go:217] disabling cri-docker service (if available) ...
	I0621 19:07:52.377104   46765 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 19:07:52.390472   46765 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 19:07:52.404662   46765 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 19:07:52.546574   46765 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 19:07:52.692768   46765 docker.go:233] disabling docker service ...
	I0621 19:07:52.692828   46765 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 19:07:52.709742   46765 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 19:07:52.723537   46765 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 19:07:52.870546   46765 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 19:07:53.019876   46765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 19:07:53.036410   46765 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 19:07:53.053814   46765 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0621 19:07:53.054381   46765 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 19:07:53.054442   46765 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 19:07:53.065320   46765 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 19:07:53.065390   46765 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 19:07:53.075254   46765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 19:07:53.085013   46765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 19:07:53.095227   46765 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 19:07:53.106168   46765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 19:07:53.116080   46765 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 19:07:53.126788   46765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
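
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with plain `sed` over SSH (pause image, cgroup manager, default sysctls). The same idea, expressed as a standalone Go sketch for illustration only (path and values are taken from the log; this is not how minikube itself performs the edit):

// Illustrative sketch only: in-place rewrite of two CRI-O config keys.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Replace the pause image line, as in the first sed command above.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// Force the cgroupfs cgroup manager, as in the second sed command above.
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}
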
	I0621 19:07:53.137243   46765 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 19:07:53.146715   46765 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0621 19:07:53.146789   46765 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 19:07:53.156273   46765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 19:07:53.295092   46765 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 19:07:53.536431   46765 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 19:07:53.536498   46765 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 19:07:53.541163   46765 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0621 19:07:53.541185   46765 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0621 19:07:53.541208   46765 command_runner.go:130] > Device: 0,22	Inode: 1336        Links: 1
	I0621 19:07:53.541217   46765 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0621 19:07:53.541223   46765 command_runner.go:130] > Access: 2024-06-21 19:07:53.405195340 +0000
	I0621 19:07:53.541228   46765 command_runner.go:130] > Modify: 2024-06-21 19:07:53.405195340 +0000
	I0621 19:07:53.541242   46765 command_runner.go:130] > Change: 2024-06-21 19:07:53.405195340 +0000
	I0621 19:07:53.541251   46765 command_runner.go:130] >  Birth: -
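
After `systemctl restart crio`, minikube waits up to 60s for /var/run/crio/crio.sock and then stats it, as the lines above show. An illustrative Go sketch of such a socket wait (not the actual minikube code; the 500ms poll interval is an assumption):

// Illustrative sketch only: poll until a unix socket path exists or a timeout expires.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil // the socket file is present
		}
		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}
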
	I0621 19:07:53.541444   46765 start.go:562] Will wait 60s for crictl version
	I0621 19:07:53.541516   46765 ssh_runner.go:195] Run: which crictl
	I0621 19:07:53.545026   46765 command_runner.go:130] > /usr/bin/crictl
	I0621 19:07:53.545092   46765 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 19:07:53.580590   46765 command_runner.go:130] > Version:  0.1.0
	I0621 19:07:53.580611   46765 command_runner.go:130] > RuntimeName:  cri-o
	I0621 19:07:53.580616   46765 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0621 19:07:53.580629   46765 command_runner.go:130] > RuntimeApiVersion:  v1
	I0621 19:07:53.582673   46765 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 19:07:53.582754   46765 ssh_runner.go:195] Run: crio --version
	I0621 19:07:53.610851   46765 command_runner.go:130] > crio version 1.29.1
	I0621 19:07:53.610872   46765 command_runner.go:130] > Version:        1.29.1
	I0621 19:07:53.610878   46765 command_runner.go:130] > GitCommit:      unknown
	I0621 19:07:53.610883   46765 command_runner.go:130] > GitCommitDate:  unknown
	I0621 19:07:53.610887   46765 command_runner.go:130] > GitTreeState:   clean
	I0621 19:07:53.610899   46765 command_runner.go:130] > BuildDate:      2024-06-21T04:36:35Z
	I0621 19:07:53.610904   46765 command_runner.go:130] > GoVersion:      go1.21.6
	I0621 19:07:53.610908   46765 command_runner.go:130] > Compiler:       gc
	I0621 19:07:53.610912   46765 command_runner.go:130] > Platform:       linux/amd64
	I0621 19:07:53.610916   46765 command_runner.go:130] > Linkmode:       dynamic
	I0621 19:07:53.610920   46765 command_runner.go:130] > BuildTags:      
	I0621 19:07:53.610924   46765 command_runner.go:130] >   containers_image_ostree_stub
	I0621 19:07:53.610928   46765 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0621 19:07:53.610931   46765 command_runner.go:130] >   btrfs_noversion
	I0621 19:07:53.610936   46765 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0621 19:07:53.610940   46765 command_runner.go:130] >   libdm_no_deferred_remove
	I0621 19:07:53.610946   46765 command_runner.go:130] >   seccomp
	I0621 19:07:53.610950   46765 command_runner.go:130] > LDFlags:          unknown
	I0621 19:07:53.610957   46765 command_runner.go:130] > SeccompEnabled:   true
	I0621 19:07:53.610961   46765 command_runner.go:130] > AppArmorEnabled:  false
	I0621 19:07:53.611033   46765 ssh_runner.go:195] Run: crio --version
	I0621 19:07:53.639215   46765 command_runner.go:130] > crio version 1.29.1
	I0621 19:07:53.639322   46765 command_runner.go:130] > Version:        1.29.1
	I0621 19:07:53.639490   46765 command_runner.go:130] > GitCommit:      unknown
	I0621 19:07:53.639511   46765 command_runner.go:130] > GitCommitDate:  unknown
	I0621 19:07:53.639518   46765 command_runner.go:130] > GitTreeState:   clean
	I0621 19:07:53.639527   46765 command_runner.go:130] > BuildDate:      2024-06-21T04:36:35Z
	I0621 19:07:53.639534   46765 command_runner.go:130] > GoVersion:      go1.21.6
	I0621 19:07:53.639540   46765 command_runner.go:130] > Compiler:       gc
	I0621 19:07:53.639553   46765 command_runner.go:130] > Platform:       linux/amd64
	I0621 19:07:53.640387   46765 command_runner.go:130] > Linkmode:       dynamic
	I0621 19:07:53.640407   46765 command_runner.go:130] > BuildTags:      
	I0621 19:07:53.640507   46765 command_runner.go:130] >   containers_image_ostree_stub
	I0621 19:07:53.640790   46765 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0621 19:07:53.640806   46765 command_runner.go:130] >   btrfs_noversion
	I0621 19:07:53.640811   46765 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0621 19:07:53.640816   46765 command_runner.go:130] >   libdm_no_deferred_remove
	I0621 19:07:53.640819   46765 command_runner.go:130] >   seccomp
	I0621 19:07:53.640824   46765 command_runner.go:130] > LDFlags:          unknown
	I0621 19:07:53.640828   46765 command_runner.go:130] > SeccompEnabled:   true
	I0621 19:07:53.640833   46765 command_runner.go:130] > AppArmorEnabled:  false
	I0621 19:07:53.643712   46765 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 19:07:53.645035   46765 main.go:141] libmachine: (multinode-851952) Calling .GetIP
	I0621 19:07:53.647772   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:07:53.648174   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:07:53.648202   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:07:53.648416   46765 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 19:07:53.652556   46765 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0621 19:07:53.652655   46765 kubeadm.go:877] updating cluster {Name:multinode-851952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.2 ClusterName:multinode-851952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.135 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fa
lse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0621 19:07:53.652763   46765 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 19:07:53.652801   46765 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 19:07:53.700872   46765 command_runner.go:130] > {
	I0621 19:07:53.700893   46765 command_runner.go:130] >   "images": [
	I0621 19:07:53.700899   46765 command_runner.go:130] >     {
	I0621 19:07:53.700913   46765 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0621 19:07:53.700921   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.700929   46765 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0621 19:07:53.700932   46765 command_runner.go:130] >       ],
	I0621 19:07:53.700936   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.700945   46765 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0621 19:07:53.700952   46765 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0621 19:07:53.700955   46765 command_runner.go:130] >       ],
	I0621 19:07:53.700961   46765 command_runner.go:130] >       "size": "65908273",
	I0621 19:07:53.700965   46765 command_runner.go:130] >       "uid": null,
	I0621 19:07:53.700968   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.700973   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.700980   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.700983   46765 command_runner.go:130] >     },
	I0621 19:07:53.700987   46765 command_runner.go:130] >     {
	I0621 19:07:53.700992   46765 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0621 19:07:53.700999   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.701004   46765 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0621 19:07:53.701010   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701014   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.701021   46765 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0621 19:07:53.701028   46765 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0621 19:07:53.701034   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701038   46765 command_runner.go:130] >       "size": "1363676",
	I0621 19:07:53.701054   46765 command_runner.go:130] >       "uid": null,
	I0621 19:07:53.701072   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.701078   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.701082   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.701086   46765 command_runner.go:130] >     },
	I0621 19:07:53.701089   46765 command_runner.go:130] >     {
	I0621 19:07:53.701095   46765 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0621 19:07:53.701098   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.701117   46765 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0621 19:07:53.701127   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701131   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.701141   46765 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0621 19:07:53.701160   46765 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0621 19:07:53.701167   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701171   46765 command_runner.go:130] >       "size": "31470524",
	I0621 19:07:53.701176   46765 command_runner.go:130] >       "uid": null,
	I0621 19:07:53.701180   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.701184   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.701190   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.701193   46765 command_runner.go:130] >     },
	I0621 19:07:53.701197   46765 command_runner.go:130] >     {
	I0621 19:07:53.701203   46765 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0621 19:07:53.701210   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.701215   46765 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0621 19:07:53.701221   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701225   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.701234   46765 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0621 19:07:53.701247   46765 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0621 19:07:53.701253   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701257   46765 command_runner.go:130] >       "size": "61245718",
	I0621 19:07:53.701261   46765 command_runner.go:130] >       "uid": null,
	I0621 19:07:53.701265   46765 command_runner.go:130] >       "username": "nonroot",
	I0621 19:07:53.701269   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.701273   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.701276   46765 command_runner.go:130] >     },
	I0621 19:07:53.701279   46765 command_runner.go:130] >     {
	I0621 19:07:53.701285   46765 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0621 19:07:53.701291   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.701295   46765 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0621 19:07:53.701301   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701305   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.701314   46765 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0621 19:07:53.701324   46765 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0621 19:07:53.701329   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701333   46765 command_runner.go:130] >       "size": "150779692",
	I0621 19:07:53.701339   46765 command_runner.go:130] >       "uid": {
	I0621 19:07:53.701343   46765 command_runner.go:130] >         "value": "0"
	I0621 19:07:53.701346   46765 command_runner.go:130] >       },
	I0621 19:07:53.701350   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.701354   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.701358   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.701363   46765 command_runner.go:130] >     },
	I0621 19:07:53.701366   46765 command_runner.go:130] >     {
	I0621 19:07:53.701372   46765 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0621 19:07:53.701376   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.701382   46765 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0621 19:07:53.701387   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701392   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.701399   46765 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0621 19:07:53.701408   46765 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0621 19:07:53.701414   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701419   46765 command_runner.go:130] >       "size": "117609954",
	I0621 19:07:53.701424   46765 command_runner.go:130] >       "uid": {
	I0621 19:07:53.701428   46765 command_runner.go:130] >         "value": "0"
	I0621 19:07:53.701434   46765 command_runner.go:130] >       },
	I0621 19:07:53.701439   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.701445   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.701449   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.701452   46765 command_runner.go:130] >     },
	I0621 19:07:53.701455   46765 command_runner.go:130] >     {
	I0621 19:07:53.701461   46765 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0621 19:07:53.701467   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.701472   46765 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0621 19:07:53.701478   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701482   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.701490   46765 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0621 19:07:53.701500   46765 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0621 19:07:53.701505   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701509   46765 command_runner.go:130] >       "size": "112194888",
	I0621 19:07:53.701515   46765 command_runner.go:130] >       "uid": {
	I0621 19:07:53.701519   46765 command_runner.go:130] >         "value": "0"
	I0621 19:07:53.701525   46765 command_runner.go:130] >       },
	I0621 19:07:53.701529   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.701535   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.701538   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.701541   46765 command_runner.go:130] >     },
	I0621 19:07:53.701545   46765 command_runner.go:130] >     {
	I0621 19:07:53.701550   46765 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0621 19:07:53.701556   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.701561   46765 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0621 19:07:53.701565   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701568   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.701587   46765 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0621 19:07:53.701596   46765 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0621 19:07:53.701600   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701604   46765 command_runner.go:130] >       "size": "85953433",
	I0621 19:07:53.701608   46765 command_runner.go:130] >       "uid": null,
	I0621 19:07:53.701615   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.701619   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.701622   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.701626   46765 command_runner.go:130] >     },
	I0621 19:07:53.701629   46765 command_runner.go:130] >     {
	I0621 19:07:53.701634   46765 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0621 19:07:53.701638   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.701642   46765 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0621 19:07:53.701645   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701649   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.701655   46765 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0621 19:07:53.701662   46765 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0621 19:07:53.701668   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701672   46765 command_runner.go:130] >       "size": "63051080",
	I0621 19:07:53.701680   46765 command_runner.go:130] >       "uid": {
	I0621 19:07:53.701686   46765 command_runner.go:130] >         "value": "0"
	I0621 19:07:53.701689   46765 command_runner.go:130] >       },
	I0621 19:07:53.701693   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.701697   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.701701   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.701705   46765 command_runner.go:130] >     },
	I0621 19:07:53.701708   46765 command_runner.go:130] >     {
	I0621 19:07:53.701714   46765 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0621 19:07:53.701718   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.701722   46765 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0621 19:07:53.701725   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701729   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.701735   46765 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0621 19:07:53.701744   46765 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0621 19:07:53.701748   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701751   46765 command_runner.go:130] >       "size": "750414",
	I0621 19:07:53.701755   46765 command_runner.go:130] >       "uid": {
	I0621 19:07:53.701759   46765 command_runner.go:130] >         "value": "65535"
	I0621 19:07:53.701763   46765 command_runner.go:130] >       },
	I0621 19:07:53.701767   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.701771   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.701778   46765 command_runner.go:130] >       "pinned": true
	I0621 19:07:53.701786   46765 command_runner.go:130] >     }
	I0621 19:07:53.701791   46765 command_runner.go:130] >   ]
	I0621 19:07:53.701808   46765 command_runner.go:130] > }
	I0621 19:07:53.702309   46765 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 19:07:53.702325   46765 crio.go:433] Images already preloaded, skipping extraction
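
The `sudo crictl images --output json` output above is what lets minikube conclude that all images for the v1.30.2 preload are present. As an illustration only, that JSON can be decoded and checked against a required-image list as sketched below; the struct fields mirror the keys shown in the log, and the "required" list is a placeholder subset, not the full preload manifest.

// Illustrative sketch only: decode crictl's image list and check for required tags.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		log.Fatal(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	required := []string{ // placeholder subset of the preload images seen in the log
		"registry.k8s.io/kube-apiserver:v1.30.2",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/pause:3.9",
	}
	var missing []string
	for _, r := range required {
		if !have[r] {
			missing = append(missing, r)
		}
	}
	if len(missing) == 0 {
		fmt.Println("all required images are preloaded for the cri-o runtime")
	} else {
		fmt.Println("missing:", strings.Join(missing, ", "))
	}
}
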
	I0621 19:07:53.702368   46765 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 19:07:53.740443   46765 command_runner.go:130] > {
	I0621 19:07:53.740468   46765 command_runner.go:130] >   "images": [
	I0621 19:07:53.740472   46765 command_runner.go:130] >     {
	I0621 19:07:53.740480   46765 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0621 19:07:53.740485   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.740491   46765 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0621 19:07:53.740494   46765 command_runner.go:130] >       ],
	I0621 19:07:53.740498   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.740508   46765 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0621 19:07:53.740515   46765 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0621 19:07:53.740518   46765 command_runner.go:130] >       ],
	I0621 19:07:53.740523   46765 command_runner.go:130] >       "size": "65908273",
	I0621 19:07:53.740527   46765 command_runner.go:130] >       "uid": null,
	I0621 19:07:53.740530   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.740539   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.740543   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.740547   46765 command_runner.go:130] >     },
	I0621 19:07:53.740551   46765 command_runner.go:130] >     {
	I0621 19:07:53.740560   46765 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0621 19:07:53.740564   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.740569   46765 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0621 19:07:53.740575   46765 command_runner.go:130] >       ],
	I0621 19:07:53.740580   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.740588   46765 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0621 19:07:53.740596   46765 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0621 19:07:53.740601   46765 command_runner.go:130] >       ],
	I0621 19:07:53.740605   46765 command_runner.go:130] >       "size": "1363676",
	I0621 19:07:53.740610   46765 command_runner.go:130] >       "uid": null,
	I0621 19:07:53.740617   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.740626   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.740632   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.740637   46765 command_runner.go:130] >     },
	I0621 19:07:53.740640   46765 command_runner.go:130] >     {
	I0621 19:07:53.740647   46765 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0621 19:07:53.740651   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.740658   46765 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0621 19:07:53.740662   46765 command_runner.go:130] >       ],
	I0621 19:07:53.740667   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.740674   46765 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0621 19:07:53.740685   46765 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0621 19:07:53.740689   46765 command_runner.go:130] >       ],
	I0621 19:07:53.740694   46765 command_runner.go:130] >       "size": "31470524",
	I0621 19:07:53.740701   46765 command_runner.go:130] >       "uid": null,
	I0621 19:07:53.740706   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.740712   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.740717   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.740723   46765 command_runner.go:130] >     },
	I0621 19:07:53.740727   46765 command_runner.go:130] >     {
	I0621 19:07:53.740736   46765 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0621 19:07:53.740743   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.740748   46765 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0621 19:07:53.740755   46765 command_runner.go:130] >       ],
	I0621 19:07:53.740759   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.740770   46765 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0621 19:07:53.740781   46765 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0621 19:07:53.740792   46765 command_runner.go:130] >       ],
	I0621 19:07:53.740799   46765 command_runner.go:130] >       "size": "61245718",
	I0621 19:07:53.740804   46765 command_runner.go:130] >       "uid": null,
	I0621 19:07:53.740811   46765 command_runner.go:130] >       "username": "nonroot",
	I0621 19:07:53.740816   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.740822   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.740826   46765 command_runner.go:130] >     },
	I0621 19:07:53.740833   46765 command_runner.go:130] >     {
	I0621 19:07:53.740839   46765 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0621 19:07:53.740845   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.740851   46765 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0621 19:07:53.740857   46765 command_runner.go:130] >       ],
	I0621 19:07:53.740861   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.740871   46765 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0621 19:07:53.740885   46765 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0621 19:07:53.740896   46765 command_runner.go:130] >       ],
	I0621 19:07:53.740904   46765 command_runner.go:130] >       "size": "150779692",
	I0621 19:07:53.740912   46765 command_runner.go:130] >       "uid": {
	I0621 19:07:53.740917   46765 command_runner.go:130] >         "value": "0"
	I0621 19:07:53.740923   46765 command_runner.go:130] >       },
	I0621 19:07:53.740928   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.740934   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.740939   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.740945   46765 command_runner.go:130] >     },
	I0621 19:07:53.740949   46765 command_runner.go:130] >     {
	I0621 19:07:53.740958   46765 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0621 19:07:53.740966   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.740971   46765 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0621 19:07:53.740978   46765 command_runner.go:130] >       ],
	I0621 19:07:53.740982   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.740992   46765 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0621 19:07:53.741002   46765 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0621 19:07:53.741009   46765 command_runner.go:130] >       ],
	I0621 19:07:53.741014   46765 command_runner.go:130] >       "size": "117609954",
	I0621 19:07:53.741020   46765 command_runner.go:130] >       "uid": {
	I0621 19:07:53.741025   46765 command_runner.go:130] >         "value": "0"
	I0621 19:07:53.741031   46765 command_runner.go:130] >       },
	I0621 19:07:53.741036   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.741044   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.741052   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.741056   46765 command_runner.go:130] >     },
	I0621 19:07:53.741062   46765 command_runner.go:130] >     {
	I0621 19:07:53.741068   46765 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0621 19:07:53.741075   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.741081   46765 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0621 19:07:53.741087   46765 command_runner.go:130] >       ],
	I0621 19:07:53.741092   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.741102   46765 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0621 19:07:53.741112   46765 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0621 19:07:53.741119   46765 command_runner.go:130] >       ],
	I0621 19:07:53.741130   46765 command_runner.go:130] >       "size": "112194888",
	I0621 19:07:53.741137   46765 command_runner.go:130] >       "uid": {
	I0621 19:07:53.741141   46765 command_runner.go:130] >         "value": "0"
	I0621 19:07:53.741147   46765 command_runner.go:130] >       },
	I0621 19:07:53.741152   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.741159   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.741165   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.741173   46765 command_runner.go:130] >     },
	I0621 19:07:53.741177   46765 command_runner.go:130] >     {
	I0621 19:07:53.741186   46765 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0621 19:07:53.741193   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.741198   46765 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0621 19:07:53.741205   46765 command_runner.go:130] >       ],
	I0621 19:07:53.741209   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.741226   46765 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0621 19:07:53.741236   46765 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0621 19:07:53.741243   46765 command_runner.go:130] >       ],
	I0621 19:07:53.741247   46765 command_runner.go:130] >       "size": "85953433",
	I0621 19:07:53.741254   46765 command_runner.go:130] >       "uid": null,
	I0621 19:07:53.741258   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.741265   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.741270   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.741276   46765 command_runner.go:130] >     },
	I0621 19:07:53.741280   46765 command_runner.go:130] >     {
	I0621 19:07:53.741290   46765 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0621 19:07:53.741297   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.741302   46765 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0621 19:07:53.741308   46765 command_runner.go:130] >       ],
	I0621 19:07:53.741313   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.741323   46765 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0621 19:07:53.741333   46765 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0621 19:07:53.741339   46765 command_runner.go:130] >       ],
	I0621 19:07:53.741344   46765 command_runner.go:130] >       "size": "63051080",
	I0621 19:07:53.741350   46765 command_runner.go:130] >       "uid": {
	I0621 19:07:53.741354   46765 command_runner.go:130] >         "value": "0"
	I0621 19:07:53.741360   46765 command_runner.go:130] >       },
	I0621 19:07:53.741364   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.741371   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.741376   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.741384   46765 command_runner.go:130] >     },
	I0621 19:07:53.741395   46765 command_runner.go:130] >     {
	I0621 19:07:53.741407   46765 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0621 19:07:53.741417   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.741428   46765 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0621 19:07:53.741439   46765 command_runner.go:130] >       ],
	I0621 19:07:53.741447   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.741459   46765 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0621 19:07:53.741469   46765 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0621 19:07:53.741473   46765 command_runner.go:130] >       ],
	I0621 19:07:53.741477   46765 command_runner.go:130] >       "size": "750414",
	I0621 19:07:53.741481   46765 command_runner.go:130] >       "uid": {
	I0621 19:07:53.741486   46765 command_runner.go:130] >         "value": "65535"
	I0621 19:07:53.741492   46765 command_runner.go:130] >       },
	I0621 19:07:53.741502   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.741509   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.741519   46765 command_runner.go:130] >       "pinned": true
	I0621 19:07:53.741525   46765 command_runner.go:130] >     }
	I0621 19:07:53.741536   46765 command_runner.go:130] >   ]
	I0621 19:07:53.741542   46765 command_runner.go:130] > }
	I0621 19:07:53.741861   46765 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 19:07:53.741913   46765 cache_images.go:84] Images are preloaded, skipping loading
	I0621 19:07:53.741928   46765 kubeadm.go:928] updating node { 192.168.39.146 8443 v1.30.2 crio true true} ...
	I0621 19:07:53.742044   46765 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-851952 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-851952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0621 19:07:53.742126   46765 ssh_runner.go:195] Run: crio config
	I0621 19:07:53.788616   46765 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0621 19:07:53.788650   46765 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0621 19:07:53.788660   46765 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0621 19:07:53.788665   46765 command_runner.go:130] > #
	I0621 19:07:53.788677   46765 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0621 19:07:53.788685   46765 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0621 19:07:53.788693   46765 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0621 19:07:53.788703   46765 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0621 19:07:53.788708   46765 command_runner.go:130] > # reload'.
	I0621 19:07:53.788718   46765 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0621 19:07:53.788729   46765 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0621 19:07:53.788741   46765 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0621 19:07:53.788751   46765 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0621 19:07:53.788761   46765 command_runner.go:130] > [crio]
	I0621 19:07:53.788772   46765 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0621 19:07:53.788784   46765 command_runner.go:130] > # containers images, in this directory.
	I0621 19:07:53.788814   46765 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0621 19:07:53.788841   46765 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0621 19:07:53.789641   46765 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0621 19:07:53.789666   46765 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0621 19:07:53.790476   46765 command_runner.go:130] > # imagestore = ""
	I0621 19:07:53.790492   46765 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0621 19:07:53.790501   46765 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0621 19:07:53.790647   46765 command_runner.go:130] > storage_driver = "overlay"
	I0621 19:07:53.790695   46765 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0621 19:07:53.790713   46765 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0621 19:07:53.790722   46765 command_runner.go:130] > storage_option = [
	I0621 19:07:53.790786   46765 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0621 19:07:53.790825   46765 command_runner.go:130] > ]
	I0621 19:07:53.790841   46765 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0621 19:07:53.790854   46765 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0621 19:07:53.791037   46765 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0621 19:07:53.791050   46765 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0621 19:07:53.791059   46765 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0621 19:07:53.791089   46765 command_runner.go:130] > # always happen on a node reboot
	I0621 19:07:53.791303   46765 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0621 19:07:53.791326   46765 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0621 19:07:53.791339   46765 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0621 19:07:53.791349   46765 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0621 19:07:53.791477   46765 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0621 19:07:53.791499   46765 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0621 19:07:53.791537   46765 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0621 19:07:53.791675   46765 command_runner.go:130] > # internal_wipe = true
	I0621 19:07:53.791691   46765 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0621 19:07:53.791697   46765 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0621 19:07:53.791779   46765 command_runner.go:130] > # internal_repair = false
	I0621 19:07:53.791795   46765 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0621 19:07:53.791803   46765 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0621 19:07:53.791813   46765 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0621 19:07:53.791874   46765 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0621 19:07:53.791888   46765 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0621 19:07:53.791894   46765 command_runner.go:130] > [crio.api]
	I0621 19:07:53.791902   46765 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0621 19:07:53.792075   46765 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0621 19:07:53.792088   46765 command_runner.go:130] > # IP address on which the stream server will listen.
	I0621 19:07:53.792095   46765 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0621 19:07:53.792106   46765 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0621 19:07:53.792119   46765 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0621 19:07:53.792315   46765 command_runner.go:130] > # stream_port = "0"
	I0621 19:07:53.792330   46765 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0621 19:07:53.792505   46765 command_runner.go:130] > # stream_enable_tls = false
	I0621 19:07:53.792516   46765 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0621 19:07:53.792680   46765 command_runner.go:130] > # stream_idle_timeout = ""
	I0621 19:07:53.792695   46765 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0621 19:07:53.792705   46765 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0621 19:07:53.792710   46765 command_runner.go:130] > # minutes.
	I0621 19:07:53.792831   46765 command_runner.go:130] > # stream_tls_cert = ""
	I0621 19:07:53.792845   46765 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0621 19:07:53.792855   46765 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0621 19:07:53.792985   46765 command_runner.go:130] > # stream_tls_key = ""
	I0621 19:07:53.792996   46765 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0621 19:07:53.793002   46765 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0621 19:07:53.793017   46765 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0621 19:07:53.793308   46765 command_runner.go:130] > # stream_tls_ca = ""
	I0621 19:07:53.793331   46765 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0621 19:07:53.793339   46765 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0621 19:07:53.793355   46765 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0621 19:07:53.793454   46765 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0621 19:07:53.793471   46765 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0621 19:07:53.793480   46765 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0621 19:07:53.793486   46765 command_runner.go:130] > [crio.runtime]
	I0621 19:07:53.793499   46765 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0621 19:07:53.793510   46765 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0621 19:07:53.793520   46765 command_runner.go:130] > # "nofile=1024:2048"
	I0621 19:07:53.793529   46765 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0621 19:07:53.793565   46765 command_runner.go:130] > # default_ulimits = [
	I0621 19:07:53.793644   46765 command_runner.go:130] > # ]
	I0621 19:07:53.793654   46765 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0621 19:07:53.793886   46765 command_runner.go:130] > # no_pivot = false
	I0621 19:07:53.793897   46765 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0621 19:07:53.793906   46765 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0621 19:07:53.794067   46765 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0621 19:07:53.794082   46765 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0621 19:07:53.794090   46765 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0621 19:07:53.794102   46765 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0621 19:07:53.794181   46765 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0621 19:07:53.794196   46765 command_runner.go:130] > # Cgroup setting for conmon
	I0621 19:07:53.794207   46765 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0621 19:07:53.794431   46765 command_runner.go:130] > conmon_cgroup = "pod"
	I0621 19:07:53.794445   46765 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0621 19:07:53.794451   46765 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0621 19:07:53.794457   46765 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0621 19:07:53.794460   46765 command_runner.go:130] > conmon_env = [
	I0621 19:07:53.794668   46765 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0621 19:07:53.794679   46765 command_runner.go:130] > ]
	I0621 19:07:53.794688   46765 command_runner.go:130] > # Additional environment variables to set for all the
	I0621 19:07:53.794695   46765 command_runner.go:130] > # containers. These are overridden if set in the
	I0621 19:07:53.794703   46765 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0621 19:07:53.794826   46765 command_runner.go:130] > # default_env = [
	I0621 19:07:53.794947   46765 command_runner.go:130] > # ]
	I0621 19:07:53.794965   46765 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0621 19:07:53.794978   46765 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0621 19:07:53.795132   46765 command_runner.go:130] > # selinux = false
	I0621 19:07:53.795149   46765 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0621 19:07:53.795160   46765 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0621 19:07:53.795188   46765 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0621 19:07:53.795357   46765 command_runner.go:130] > # seccomp_profile = ""
	I0621 19:07:53.795375   46765 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0621 19:07:53.795382   46765 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0621 19:07:53.795391   46765 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0621 19:07:53.795407   46765 command_runner.go:130] > # which might increase security.
	I0621 19:07:53.795418   46765 command_runner.go:130] > # This option is currently deprecated,
	I0621 19:07:53.795434   46765 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0621 19:07:53.795446   46765 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0621 19:07:53.795457   46765 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0621 19:07:53.795470   46765 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0621 19:07:53.795478   46765 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0621 19:07:53.795489   46765 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0621 19:07:53.795504   46765 command_runner.go:130] > # This option supports live configuration reload.
	I0621 19:07:53.795661   46765 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0621 19:07:53.795680   46765 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0621 19:07:53.795688   46765 command_runner.go:130] > # the cgroup blockio controller.
	I0621 19:07:53.795817   46765 command_runner.go:130] > # blockio_config_file = ""
	I0621 19:07:53.795834   46765 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0621 19:07:53.795842   46765 command_runner.go:130] > # blockio parameters.
	I0621 19:07:53.796044   46765 command_runner.go:130] > # blockio_reload = false
	I0621 19:07:53.796067   46765 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0621 19:07:53.796074   46765 command_runner.go:130] > # irqbalance daemon.
	I0621 19:07:53.796197   46765 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0621 19:07:53.796216   46765 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0621 19:07:53.796227   46765 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0621 19:07:53.796240   46765 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0621 19:07:53.796419   46765 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0621 19:07:53.796439   46765 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0621 19:07:53.796448   46765 command_runner.go:130] > # This option supports live configuration reload.
	I0621 19:07:53.796553   46765 command_runner.go:130] > # rdt_config_file = ""
	I0621 19:07:53.796568   46765 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0621 19:07:53.796638   46765 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0621 19:07:53.796665   46765 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0621 19:07:53.796795   46765 command_runner.go:130] > # separate_pull_cgroup = ""
	I0621 19:07:53.796810   46765 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0621 19:07:53.796820   46765 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0621 19:07:53.796830   46765 command_runner.go:130] > # will be added.
	I0621 19:07:53.798104   46765 command_runner.go:130] > # default_capabilities = [
	I0621 19:07:53.798114   46765 command_runner.go:130] > # 	"CHOWN",
	I0621 19:07:53.798119   46765 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0621 19:07:53.798122   46765 command_runner.go:130] > # 	"FSETID",
	I0621 19:07:53.798126   46765 command_runner.go:130] > # 	"FOWNER",
	I0621 19:07:53.798129   46765 command_runner.go:130] > # 	"SETGID",
	I0621 19:07:53.798133   46765 command_runner.go:130] > # 	"SETUID",
	I0621 19:07:53.798138   46765 command_runner.go:130] > # 	"SETPCAP",
	I0621 19:07:53.798144   46765 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0621 19:07:53.798150   46765 command_runner.go:130] > # 	"KILL",
	I0621 19:07:53.798157   46765 command_runner.go:130] > # ]
	I0621 19:07:53.798175   46765 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0621 19:07:53.798187   46765 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0621 19:07:53.798192   46765 command_runner.go:130] > # add_inheritable_capabilities = false
	I0621 19:07:53.798197   46765 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0621 19:07:53.798205   46765 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0621 19:07:53.798210   46765 command_runner.go:130] > default_sysctls = [
	I0621 19:07:53.798215   46765 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0621 19:07:53.798221   46765 command_runner.go:130] > ]
	I0621 19:07:53.798226   46765 command_runner.go:130] > # List of devices on the host that a
	I0621 19:07:53.798239   46765 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0621 19:07:53.798249   46765 command_runner.go:130] > # allowed_devices = [
	I0621 19:07:53.798260   46765 command_runner.go:130] > # 	"/dev/fuse",
	I0621 19:07:53.798269   46765 command_runner.go:130] > # ]
	I0621 19:07:53.798278   46765 command_runner.go:130] > # List of additional devices. specified as
	I0621 19:07:53.798288   46765 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0621 19:07:53.798295   46765 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0621 19:07:53.798303   46765 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0621 19:07:53.798309   46765 command_runner.go:130] > # additional_devices = [
	I0621 19:07:53.798313   46765 command_runner.go:130] > # ]
	I0621 19:07:53.798322   46765 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0621 19:07:53.798330   46765 command_runner.go:130] > # cdi_spec_dirs = [
	I0621 19:07:53.798340   46765 command_runner.go:130] > # 	"/etc/cdi",
	I0621 19:07:53.798350   46765 command_runner.go:130] > # 	"/var/run/cdi",
	I0621 19:07:53.798355   46765 command_runner.go:130] > # ]
	I0621 19:07:53.798369   46765 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0621 19:07:53.798382   46765 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0621 19:07:53.798391   46765 command_runner.go:130] > # Defaults to false.
	I0621 19:07:53.798398   46765 command_runner.go:130] > # device_ownership_from_security_context = false
	I0621 19:07:53.798407   46765 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0621 19:07:53.798415   46765 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0621 19:07:53.798421   46765 command_runner.go:130] > # hooks_dir = [
	I0621 19:07:53.798426   46765 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0621 19:07:53.798432   46765 command_runner.go:130] > # ]
	I0621 19:07:53.798442   46765 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0621 19:07:53.798456   46765 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0621 19:07:53.798468   46765 command_runner.go:130] > # its default mounts from the following two files:
	I0621 19:07:53.798476   46765 command_runner.go:130] > #
	I0621 19:07:53.798506   46765 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0621 19:07:53.798517   46765 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0621 19:07:53.798522   46765 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0621 19:07:53.798528   46765 command_runner.go:130] > #
	I0621 19:07:53.798535   46765 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0621 19:07:53.798549   46765 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0621 19:07:53.798563   46765 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0621 19:07:53.798575   46765 command_runner.go:130] > #      only add mounts it finds in this file.
	I0621 19:07:53.798583   46765 command_runner.go:130] > #
	I0621 19:07:53.798590   46765 command_runner.go:130] > # default_mounts_file = ""
	I0621 19:07:53.798602   46765 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0621 19:07:53.798616   46765 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0621 19:07:53.798630   46765 command_runner.go:130] > pids_limit = 1024
	I0621 19:07:53.798640   46765 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0621 19:07:53.798653   46765 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0621 19:07:53.798667   46765 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0621 19:07:53.798684   46765 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0621 19:07:53.798694   46765 command_runner.go:130] > # log_size_max = -1
	I0621 19:07:53.798708   46765 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0621 19:07:53.798717   46765 command_runner.go:130] > # log_to_journald = false
	I0621 19:07:53.798730   46765 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0621 19:07:53.798738   46765 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0621 19:07:53.798747   46765 command_runner.go:130] > # Path to directory for container attach sockets.
	I0621 19:07:53.798759   46765 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0621 19:07:53.798771   46765 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0621 19:07:53.798782   46765 command_runner.go:130] > # bind_mount_prefix = ""
	I0621 19:07:53.798794   46765 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0621 19:07:53.798804   46765 command_runner.go:130] > # read_only = false
	I0621 19:07:53.798817   46765 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0621 19:07:53.798829   46765 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0621 19:07:53.798837   46765 command_runner.go:130] > # live configuration reload.
	I0621 19:07:53.798844   46765 command_runner.go:130] > # log_level = "info"
	I0621 19:07:53.798853   46765 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0621 19:07:53.798865   46765 command_runner.go:130] > # This option supports live configuration reload.
	I0621 19:07:53.798872   46765 command_runner.go:130] > # log_filter = ""
	I0621 19:07:53.798885   46765 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0621 19:07:53.798899   46765 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0621 19:07:53.798909   46765 command_runner.go:130] > # separated by comma.
	I0621 19:07:53.798923   46765 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0621 19:07:53.798931   46765 command_runner.go:130] > # uid_mappings = ""
	I0621 19:07:53.798937   46765 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0621 19:07:53.798949   46765 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0621 19:07:53.798958   46765 command_runner.go:130] > # separated by comma.
	I0621 19:07:53.798971   46765 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0621 19:07:53.798981   46765 command_runner.go:130] > # gid_mappings = ""
	I0621 19:07:53.798993   46765 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0621 19:07:53.799006   46765 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0621 19:07:53.799019   46765 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0621 19:07:53.799030   46765 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0621 19:07:53.799038   46765 command_runner.go:130] > # minimum_mappable_uid = -1
	I0621 19:07:53.799050   46765 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0621 19:07:53.799063   46765 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0621 19:07:53.799075   46765 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0621 19:07:53.799090   46765 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0621 19:07:53.799100   46765 command_runner.go:130] > # minimum_mappable_gid = -1
	I0621 19:07:53.799112   46765 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0621 19:07:53.799126   46765 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0621 19:07:53.799134   46765 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0621 19:07:53.799142   46765 command_runner.go:130] > # ctr_stop_timeout = 30
	I0621 19:07:53.799152   46765 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0621 19:07:53.799169   46765 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0621 19:07:53.799179   46765 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0621 19:07:53.799190   46765 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0621 19:07:53.799200   46765 command_runner.go:130] > drop_infra_ctr = false
	I0621 19:07:53.799212   46765 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0621 19:07:53.799224   46765 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0621 19:07:53.799235   46765 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0621 19:07:53.799240   46765 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0621 19:07:53.799255   46765 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0621 19:07:53.799268   46765 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0621 19:07:53.799277   46765 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0621 19:07:53.799288   46765 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0621 19:07:53.799299   46765 command_runner.go:130] > # shared_cpuset = ""
	I0621 19:07:53.799308   46765 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0621 19:07:53.799316   46765 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0621 19:07:53.799321   46765 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0621 19:07:53.799333   46765 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0621 19:07:53.799343   46765 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0621 19:07:53.799356   46765 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0621 19:07:53.799370   46765 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0621 19:07:53.799376   46765 command_runner.go:130] > # enable_criu_support = false
	I0621 19:07:53.799382   46765 command_runner.go:130] > # Enable/disable the generation of the container,
	I0621 19:07:53.799391   46765 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0621 19:07:53.799398   46765 command_runner.go:130] > # enable_pod_events = false
	I0621 19:07:53.799408   46765 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0621 19:07:53.799421   46765 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0621 19:07:53.799432   46765 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0621 19:07:53.799441   46765 command_runner.go:130] > # default_runtime = "runc"
	I0621 19:07:53.799451   46765 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0621 19:07:53.799463   46765 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0621 19:07:53.799479   46765 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0621 19:07:53.799489   46765 command_runner.go:130] > # creation as a file is not desired either.
	I0621 19:07:53.799518   46765 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0621 19:07:53.799533   46765 command_runner.go:130] > # the hostname is being managed dynamically.
	I0621 19:07:53.799543   46765 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0621 19:07:53.799551   46765 command_runner.go:130] > # ]
	I0621 19:07:53.799562   46765 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0621 19:07:53.799576   46765 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0621 19:07:53.799588   46765 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0621 19:07:53.799599   46765 command_runner.go:130] > # Each entry in the table should follow the format:
	I0621 19:07:53.799606   46765 command_runner.go:130] > #
	I0621 19:07:53.799613   46765 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0621 19:07:53.799618   46765 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0621 19:07:53.799652   46765 command_runner.go:130] > # runtime_type = "oci"
	I0621 19:07:53.799659   46765 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0621 19:07:53.799663   46765 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0621 19:07:53.799670   46765 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0621 19:07:53.799675   46765 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0621 19:07:53.799681   46765 command_runner.go:130] > # monitor_env = []
	I0621 19:07:53.799686   46765 command_runner.go:130] > # privileged_without_host_devices = false
	I0621 19:07:53.799692   46765 command_runner.go:130] > # allowed_annotations = []
	I0621 19:07:53.799699   46765 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0621 19:07:53.799704   46765 command_runner.go:130] > # Where:
	I0621 19:07:53.799709   46765 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0621 19:07:53.799717   46765 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0621 19:07:53.799726   46765 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0621 19:07:53.799732   46765 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0621 19:07:53.799738   46765 command_runner.go:130] > #   in $PATH.
	I0621 19:07:53.799744   46765 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0621 19:07:53.799750   46765 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0621 19:07:53.799756   46765 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0621 19:07:53.799763   46765 command_runner.go:130] > #   state.
	I0621 19:07:53.799768   46765 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0621 19:07:53.799776   46765 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0621 19:07:53.799782   46765 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0621 19:07:53.799789   46765 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0621 19:07:53.799794   46765 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0621 19:07:53.799802   46765 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0621 19:07:53.799809   46765 command_runner.go:130] > #   The currently recognized values are:
	I0621 19:07:53.799815   46765 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0621 19:07:53.799824   46765 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0621 19:07:53.799829   46765 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0621 19:07:53.799837   46765 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0621 19:07:53.799846   46765 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0621 19:07:53.799854   46765 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0621 19:07:53.799860   46765 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0621 19:07:53.799869   46765 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0621 19:07:53.799875   46765 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0621 19:07:53.799883   46765 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0621 19:07:53.799888   46765 command_runner.go:130] > #   deprecated option "conmon".
	I0621 19:07:53.799896   46765 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0621 19:07:53.799903   46765 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0621 19:07:53.799908   46765 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0621 19:07:53.799916   46765 command_runner.go:130] > #   should be moved to the container's cgroup
	I0621 19:07:53.799929   46765 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0621 19:07:53.799936   46765 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0621 19:07:53.799944   46765 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0621 19:07:53.799951   46765 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0621 19:07:53.799955   46765 command_runner.go:130] > #
	I0621 19:07:53.799960   46765 command_runner.go:130] > # Using the seccomp notifier feature:
	I0621 19:07:53.799963   46765 command_runner.go:130] > #
	I0621 19:07:53.799969   46765 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0621 19:07:53.799977   46765 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0621 19:07:53.799983   46765 command_runner.go:130] > #
	I0621 19:07:53.799989   46765 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0621 19:07:53.799998   46765 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0621 19:07:53.800001   46765 command_runner.go:130] > #
	I0621 19:07:53.800006   46765 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0621 19:07:53.800012   46765 command_runner.go:130] > # feature.
	I0621 19:07:53.800015   46765 command_runner.go:130] > #
	I0621 19:07:53.800020   46765 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0621 19:07:53.800026   46765 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0621 19:07:53.800034   46765 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0621 19:07:53.800042   46765 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0621 19:07:53.800047   46765 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0621 19:07:53.800053   46765 command_runner.go:130] > #
	I0621 19:07:53.800058   46765 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0621 19:07:53.800066   46765 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0621 19:07:53.800069   46765 command_runner.go:130] > #
	I0621 19:07:53.800077   46765 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0621 19:07:53.800085   46765 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0621 19:07:53.800087   46765 command_runner.go:130] > #
	I0621 19:07:53.800094   46765 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0621 19:07:53.800101   46765 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0621 19:07:53.800105   46765 command_runner.go:130] > # limitation.
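The notifier described above needs two pieces: the annotation has to be allowed on the runtime handler in the CRI-O config, and the pod itself has to carry the annotation with restartPolicy set to Never. A minimal sketch of both, assuming the default runc handler; the drop-in file name and pod name are illustrative, not taken from this run:

	# Allow the annotation on the runc handler (drop-in path is an assumption).
	sudo tee /etc/crio/crio.conf.d/10-seccomp-notifier.conf <<-'EOF'
	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/bin/runc"
	allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
	EOF
	sudo systemctl restart crio

	# Opt a pod in; restartPolicy must be Never so the kubelet does not restart it.
	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: seccomp-debug            # hypothetical name
	  annotations:
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"
	spec:
	  restartPolicy: Never
	  containers:
	  - name: app
	    image: busybox:stable
	    command: ["sleep", "3600"]
	    securityContext:
	      seccompProfile:
	        type: RuntimeDefault
	EOF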
	I0621 19:07:53.800112   46765 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0621 19:07:53.800116   46765 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0621 19:07:53.800122   46765 command_runner.go:130] > runtime_type = "oci"
	I0621 19:07:53.800126   46765 command_runner.go:130] > runtime_root = "/run/runc"
	I0621 19:07:53.800133   46765 command_runner.go:130] > runtime_config_path = ""
	I0621 19:07:53.800141   46765 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0621 19:07:53.800146   46765 command_runner.go:130] > monitor_cgroup = "pod"
	I0621 19:07:53.800152   46765 command_runner.go:130] > monitor_exec_cgroup = ""
	I0621 19:07:53.800167   46765 command_runner.go:130] > monitor_env = [
	I0621 19:07:53.800176   46765 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0621 19:07:53.800184   46765 command_runner.go:130] > ]
	I0621 19:07:53.800191   46765 command_runner.go:130] > privileged_without_host_devices = false
	I0621 19:07:53.800205   46765 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0621 19:07:53.800216   46765 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0621 19:07:53.800227   46765 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0621 19:07:53.800238   46765 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0621 19:07:53.800249   46765 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0621 19:07:53.800254   46765 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0621 19:07:53.800263   46765 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0621 19:07:53.800272   46765 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0621 19:07:53.800277   46765 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0621 19:07:53.800286   46765 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0621 19:07:53.800290   46765 command_runner.go:130] > # Example:
	I0621 19:07:53.800296   46765 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0621 19:07:53.800301   46765 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0621 19:07:53.800308   46765 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0621 19:07:53.800313   46765 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0621 19:07:53.800319   46765 command_runner.go:130] > # cpuset = 0
	I0621 19:07:53.800322   46765 command_runner.go:130] > # cpushares = "0-1"
	I0621 19:07:53.800328   46765 command_runner.go:130] > # Where:
	I0621 19:07:53.800332   46765 command_runner.go:130] > # The workload name is workload-type.
	I0621 19:07:53.800338   46765 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0621 19:07:53.800346   46765 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0621 19:07:53.800354   46765 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0621 19:07:53.800361   46765 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0621 19:07:53.800369   46765 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
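Given the example workload above (activation annotation io.crio/workload, prefix io.crio.workload-type), a pod opts in with the bare activation key and can override a resource per container. A sketch under those assumptions; the pod name, container name, and cpushares value are illustrative:

	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo                                  # hypothetical name
	  annotations:
	    io.crio/workload: ""                               # activation key; value ignored
	    io.crio.workload-type/app: '{"cpushares": "512"}'  # per-container override
	spec:
	  containers:
	  - name: app
	    image: busybox:stable
	    command: ["sleep", "3600"]
	EOF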
	I0621 19:07:53.800375   46765 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0621 19:07:53.800384   46765 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0621 19:07:53.800390   46765 command_runner.go:130] > # Default value is set to true
	I0621 19:07:53.800394   46765 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0621 19:07:53.800402   46765 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0621 19:07:53.800407   46765 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0621 19:07:53.800414   46765 command_runner.go:130] > # Default value is set to 'false'
	I0621 19:07:53.800419   46765 command_runner.go:130] > # disable_hostport_mapping = false
	I0621 19:07:53.800428   46765 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0621 19:07:53.800433   46765 command_runner.go:130] > #
	I0621 19:07:53.800439   46765 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0621 19:07:53.800447   46765 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0621 19:07:53.800455   46765 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0621 19:07:53.800461   46765 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0621 19:07:53.800466   46765 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
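For the system-wide route mentioned above, registries are configured through containers-registries.conf(5) rather than in this file. A minimal sketch using a drop-in fragment, assuming a hypothetical internal registry that should be pulled over plain HTTP:

	sudo tee /etc/containers/registries.conf.d/50-internal.conf <<-'EOF'
	unqualified-search-registries = ["docker.io"]

	[[registry]]
	prefix = "registry.example.internal"   # hypothetical registry
	location = "registry.example.internal"
	insecure = true
	EOF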
	I0621 19:07:53.800469   46765 command_runner.go:130] > [crio.image]
	I0621 19:07:53.800474   46765 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0621 19:07:53.800478   46765 command_runner.go:130] > # default_transport = "docker://"
	I0621 19:07:53.800484   46765 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0621 19:07:53.800489   46765 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0621 19:07:53.800493   46765 command_runner.go:130] > # global_auth_file = ""
	I0621 19:07:53.800498   46765 command_runner.go:130] > # The image used to instantiate infra containers.
	I0621 19:07:53.800502   46765 command_runner.go:130] > # This option supports live configuration reload.
	I0621 19:07:53.800507   46765 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0621 19:07:53.800512   46765 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0621 19:07:53.800518   46765 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0621 19:07:53.800522   46765 command_runner.go:130] > # This option supports live configuration reload.
	I0621 19:07:53.800526   46765 command_runner.go:130] > # pause_image_auth_file = ""
	I0621 19:07:53.800531   46765 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0621 19:07:53.800536   46765 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0621 19:07:53.800541   46765 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0621 19:07:53.800546   46765 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0621 19:07:53.800550   46765 command_runner.go:130] > # pause_command = "/pause"
	I0621 19:07:53.800555   46765 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0621 19:07:53.800560   46765 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0621 19:07:53.800566   46765 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0621 19:07:53.800571   46765 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0621 19:07:53.800576   46765 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0621 19:07:53.800582   46765 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0621 19:07:53.800585   46765 command_runner.go:130] > # pinned_images = [
	I0621 19:07:53.800588   46765 command_runner.go:130] > # ]
	I0621 19:07:53.800599   46765 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0621 19:07:53.800605   46765 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0621 19:07:53.800611   46765 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0621 19:07:53.800616   46765 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0621 19:07:53.800620   46765 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0621 19:07:53.800624   46765 command_runner.go:130] > # signature_policy = ""
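The system-wide policy referenced above lives at /etc/containers/policy.json and its format is documented in containers-policy.json(5). A minimal sketch that accepts everything by default but requires a GPG signature for one hypothetical registry; the registry name and key path are illustrative:

	sudo tee /etc/containers/policy.json <<-'EOF'
	{
	  "default": [ { "type": "insecureAcceptAnything" } ],
	  "transports": {
	    "docker": {
	      "registry.example.internal": [
	        { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/example.gpg" }
	      ]
	    }
	  }
	}
	EOF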
	I0621 19:07:53.800628   46765 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0621 19:07:53.800636   46765 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0621 19:07:53.800641   46765 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0621 19:07:53.800649   46765 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0621 19:07:53.800654   46765 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0621 19:07:53.800661   46765 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0621 19:07:53.800666   46765 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0621 19:07:53.800674   46765 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0621 19:07:53.800681   46765 command_runner.go:130] > # changing them here.
	I0621 19:07:53.800684   46765 command_runner.go:130] > # insecure_registries = [
	I0621 19:07:53.800690   46765 command_runner.go:130] > # ]
	I0621 19:07:53.800696   46765 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0621 19:07:53.800703   46765 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0621 19:07:53.800707   46765 command_runner.go:130] > # image_volumes = "mkdir"
	I0621 19:07:53.800714   46765 command_runner.go:130] > # Temporary directory to use for storing big files
	I0621 19:07:53.800717   46765 command_runner.go:130] > # big_files_temporary_dir = ""
	I0621 19:07:53.800725   46765 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0621 19:07:53.800732   46765 command_runner.go:130] > # CNI plugins.
	I0621 19:07:53.800735   46765 command_runner.go:130] > [crio.network]
	I0621 19:07:53.800741   46765 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0621 19:07:53.800748   46765 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0621 19:07:53.800752   46765 command_runner.go:130] > # cni_default_network = ""
	I0621 19:07:53.800760   46765 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0621 19:07:53.800766   46765 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0621 19:07:53.800771   46765 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0621 19:07:53.800778   46765 command_runner.go:130] > # plugin_dirs = [
	I0621 19:07:53.800781   46765 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0621 19:07:53.800787   46765 command_runner.go:130] > # ]
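With the defaults above, CRI-O picks the first configuration file it finds in /etc/cni/net.d/ and expects plugin binaries under /opt/cni/bin/. A bridge-plugin sketch with an illustrative file name and the 10.244.0.0/16 pod subnet used in this run:

	sudo tee /etc/cni/net.d/10-bridge.conflist <<-'EOF'
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "cni0",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16",
	        "routes": [ { "dst": "0.0.0.0/0" } ]
	      }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF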
	I0621 19:07:53.800792   46765 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0621 19:07:53.800798   46765 command_runner.go:130] > [crio.metrics]
	I0621 19:07:53.800807   46765 command_runner.go:130] > # Globally enable or disable metrics support.
	I0621 19:07:53.800813   46765 command_runner.go:130] > enable_metrics = true
	I0621 19:07:53.800818   46765 command_runner.go:130] > # Specify enabled metrics collectors.
	I0621 19:07:53.800824   46765 command_runner.go:130] > # Per default all metrics are enabled.
	I0621 19:07:53.800830   46765 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0621 19:07:53.800838   46765 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0621 19:07:53.800846   46765 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0621 19:07:53.800849   46765 command_runner.go:130] > # metrics_collectors = [
	I0621 19:07:53.800853   46765 command_runner.go:130] > # 	"operations",
	I0621 19:07:53.800858   46765 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0621 19:07:53.800864   46765 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0621 19:07:53.800868   46765 command_runner.go:130] > # 	"operations_errors",
	I0621 19:07:53.800874   46765 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0621 19:07:53.800878   46765 command_runner.go:130] > # 	"image_pulls_by_name",
	I0621 19:07:53.800885   46765 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0621 19:07:53.800889   46765 command_runner.go:130] > # 	"image_pulls_failures",
	I0621 19:07:53.800895   46765 command_runner.go:130] > # 	"image_pulls_successes",
	I0621 19:07:53.800899   46765 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0621 19:07:53.800903   46765 command_runner.go:130] > # 	"image_layer_reuse",
	I0621 19:07:53.800910   46765 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0621 19:07:53.800918   46765 command_runner.go:130] > # 	"containers_oom_total",
	I0621 19:07:53.800922   46765 command_runner.go:130] > # 	"containers_oom",
	I0621 19:07:53.800926   46765 command_runner.go:130] > # 	"processes_defunct",
	I0621 19:07:53.800930   46765 command_runner.go:130] > # 	"operations_total",
	I0621 19:07:53.800935   46765 command_runner.go:130] > # 	"operations_latency_seconds",
	I0621 19:07:53.800939   46765 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0621 19:07:53.800945   46765 command_runner.go:130] > # 	"operations_errors_total",
	I0621 19:07:53.800949   46765 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0621 19:07:53.800956   46765 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0621 19:07:53.800960   46765 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0621 19:07:53.800967   46765 command_runner.go:130] > # 	"image_pulls_success_total",
	I0621 19:07:53.800971   46765 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0621 19:07:53.800978   46765 command_runner.go:130] > # 	"containers_oom_count_total",
	I0621 19:07:53.800982   46765 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0621 19:07:53.800986   46765 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0621 19:07:53.800992   46765 command_runner.go:130] > # ]
	I0621 19:07:53.800998   46765 command_runner.go:130] > # The port on which the metrics server will listen.
	I0621 19:07:53.801004   46765 command_runner.go:130] > # metrics_port = 9090
	I0621 19:07:53.801018   46765 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0621 19:07:53.801022   46765 command_runner.go:130] > # metrics_socket = ""
	I0621 19:07:53.801027   46765 command_runner.go:130] > # The certificate for the secure metrics server.
	I0621 19:07:53.801033   46765 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0621 19:07:53.801041   46765 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0621 19:07:53.801047   46765 command_runner.go:130] > # certificate on any modification event.
	I0621 19:07:53.801051   46765 command_runner.go:130] > # metrics_cert = ""
	I0621 19:07:53.801058   46765 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0621 19:07:53.801063   46765 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0621 19:07:53.801067   46765 command_runner.go:130] > # metrics_key = ""
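Since enable_metrics is true and the port defaults to 9090, the Prometheus endpoint can be probed directly on the node. The host, port, and plain-HTTP scheme below assume those defaults and no metrics_cert:

	curl -s http://127.0.0.1:9090/metrics | grep -E '^(crio|container_runtime_crio)_' | head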
	I0621 19:07:53.801072   46765 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0621 19:07:53.801078   46765 command_runner.go:130] > [crio.tracing]
	I0621 19:07:53.801084   46765 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0621 19:07:53.801090   46765 command_runner.go:130] > # enable_tracing = false
	I0621 19:07:53.801095   46765 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0621 19:07:53.801101   46765 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0621 19:07:53.801107   46765 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0621 19:07:53.801114   46765 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0621 19:07:53.801118   46765 command_runner.go:130] > # CRI-O NRI configuration.
	I0621 19:07:53.801123   46765 command_runner.go:130] > [crio.nri]
	I0621 19:07:53.801127   46765 command_runner.go:130] > # Globally enable or disable NRI.
	I0621 19:07:53.801135   46765 command_runner.go:130] > # enable_nri = false
	I0621 19:07:53.801141   46765 command_runner.go:130] > # NRI socket to listen on.
	I0621 19:07:53.801151   46765 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0621 19:07:53.801164   46765 command_runner.go:130] > # NRI plugin directory to use.
	I0621 19:07:53.801175   46765 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0621 19:07:53.801182   46765 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0621 19:07:53.801193   46765 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0621 19:07:53.801201   46765 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0621 19:07:53.801210   46765 command_runner.go:130] > # nri_disable_connections = false
	I0621 19:07:53.801216   46765 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0621 19:07:53.801223   46765 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0621 19:07:53.801228   46765 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0621 19:07:53.801235   46765 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0621 19:07:53.801246   46765 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0621 19:07:53.801252   46765 command_runner.go:130] > [crio.stats]
	I0621 19:07:53.801260   46765 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0621 19:07:53.801267   46765 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0621 19:07:53.801272   46765 command_runner.go:130] > # stats_collection_period = 0
	I0621 19:07:53.801305   46765 command_runner.go:130] ! time="2024-06-21 19:07:53.760785412Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0621 19:07:53.801318   46765 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0621 19:07:53.801406   46765 cni.go:84] Creating CNI manager for ""
	I0621 19:07:53.801416   46765 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0621 19:07:53.801423   46765 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0621 19:07:53.801442   46765 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.146 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-851952 NodeName:multinode-851952 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0621 19:07:53.801568   46765 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-851952"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0621 19:07:53.801620   46765 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 19:07:53.811659   46765 command_runner.go:130] > kubeadm
	I0621 19:07:53.811685   46765 command_runner.go:130] > kubectl
	I0621 19:07:53.811692   46765 command_runner.go:130] > kubelet
	I0621 19:07:53.811732   46765 binaries.go:44] Found k8s binaries, skipping transfer
	I0621 19:07:53.811786   46765 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0621 19:07:53.821183   46765 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0621 19:07:53.837267   46765 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0621 19:07:53.852812   46765 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0621 19:07:53.868545   46765 ssh_runner.go:195] Run: grep 192.168.39.146	control-plane.minikube.internal$ /etc/hosts
	I0621 19:07:53.872025   46765 command_runner.go:130] > 192.168.39.146	control-plane.minikube.internal
	I0621 19:07:53.872171   46765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 19:07:54.017064   46765 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0621 19:07:54.032446   46765 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952 for IP: 192.168.39.146
	I0621 19:07:54.032470   46765 certs.go:194] generating shared ca certs ...
	I0621 19:07:54.032493   46765 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 19:07:54.032680   46765 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 19:07:54.032738   46765 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 19:07:54.032753   46765 certs.go:256] generating profile certs ...
	I0621 19:07:54.032864   46765 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952/client.key
	I0621 19:07:54.032974   46765 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952/apiserver.key.d197130b
	I0621 19:07:54.033031   46765 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952/proxy-client.key
	I0621 19:07:54.033047   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 19:07:54.033070   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 19:07:54.033092   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 19:07:54.033112   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 19:07:54.033133   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 19:07:54.033152   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 19:07:54.033175   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 19:07:54.033191   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 19:07:54.033259   46765 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 19:07:54.033304   46765 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 19:07:54.033319   46765 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 19:07:54.033357   46765 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 19:07:54.033395   46765 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 19:07:54.033431   46765 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 19:07:54.033489   46765 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 19:07:54.033540   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 19:07:54.033563   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 19:07:54.033583   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 19:07:54.034406   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 19:07:54.058066   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 19:07:54.080170   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 19:07:54.102474   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 19:07:54.124098   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0621 19:07:54.146047   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0621 19:07:54.168689   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 19:07:54.191456   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0621 19:07:54.216292   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 19:07:54.239770   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 19:07:54.261111   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 19:07:54.282610   46765 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0621 19:07:54.297659   46765 ssh_runner.go:195] Run: openssl version
	I0621 19:07:54.303098   46765 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0621 19:07:54.303183   46765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 19:07:54.313948   46765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 19:07:54.317922   46765 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 19:07:54.317950   46765 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 19:07:54.317986   46765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 19:07:54.322958   46765 command_runner.go:130] > 51391683
	I0621 19:07:54.323123   46765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 19:07:54.332603   46765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 19:07:54.344115   46765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 19:07:54.348293   46765 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 19:07:54.348314   46765 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 19:07:54.348349   46765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 19:07:54.353448   46765 command_runner.go:130] > 3ec20f2e
	I0621 19:07:54.353573   46765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 19:07:54.362495   46765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 19:07:54.372987   46765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 19:07:54.377014   46765 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 19:07:54.377169   46765 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 19:07:54.377207   46765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 19:07:54.382227   46765 command_runner.go:130] > b5213941
	I0621 19:07:54.382388   46765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
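The three blocks above all follow the same pattern for trusting a CA: hash the certificate subject with openssl, then symlink it into /etc/ssl/certs under <hash>.0. A sketch of that pattern for a single certificate, using a path from this run:

	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")     # e.g. b5213941 above
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"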
	I0621 19:07:54.391940   46765 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 19:07:54.396528   46765 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 19:07:54.396552   46765 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0621 19:07:54.396557   46765 command_runner.go:130] > Device: 253,1	Inode: 6292501     Links: 1
	I0621 19:07:54.396563   46765 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0621 19:07:54.396568   46765 command_runner.go:130] > Access: 2024-06-21 19:01:50.569403511 +0000
	I0621 19:07:54.396573   46765 command_runner.go:130] > Modify: 2024-06-21 19:01:50.569403511 +0000
	I0621 19:07:54.396578   46765 command_runner.go:130] > Change: 2024-06-21 19:01:50.569403511 +0000
	I0621 19:07:54.396583   46765 command_runner.go:130] >  Birth: 2024-06-21 19:01:50.569403511 +0000
	I0621 19:07:54.396655   46765 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0621 19:07:54.402030   46765 command_runner.go:130] > Certificate will not expire
	I0621 19:07:54.402236   46765 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0621 19:07:54.407455   46765 command_runner.go:130] > Certificate will not expire
	I0621 19:07:54.407673   46765 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0621 19:07:54.413128   46765 command_runner.go:130] > Certificate will not expire
	I0621 19:07:54.413303   46765 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0621 19:07:54.418723   46765 command_runner.go:130] > Certificate will not expire
	I0621 19:07:54.418786   46765 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0621 19:07:54.423928   46765 command_runner.go:130] > Certificate will not expire
	I0621 19:07:54.424138   46765 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0621 19:07:54.429083   46765 command_runner.go:130] > Certificate will not expire
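The six probes above use openssl's -checkend flag to confirm that nothing expires within 86400 seconds (24 hours). The same check can be run over all control-plane certificates in one loop; the glob below is illustrative:

	for crt in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
	  sudo openssl x509 -noout -checkend 86400 -in "$crt" \
	    && echo "OK        $crt" \
	    || echo "EXPIRING  $crt"
	done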
	I0621 19:07:54.429231   46765 kubeadm.go:391] StartCluster: {Name:multinode-851952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:multinode-851952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.135 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 19:07:54.429380   46765 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0621 19:07:54.429429   46765 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0621 19:07:54.465678   46765 command_runner.go:130] > d4fd10189beef0ec38e8cb9f7a74f819461e323eae4b3b6bbddfef6886151497
	I0621 19:07:54.465708   46765 command_runner.go:130] > 55c61aaf731d1c5b583250944ecfc821dc3d84cda2e4811057a76e46f1f7e359
	I0621 19:07:54.465718   46765 command_runner.go:130] > 36ce441ec2d19ca6ea23c289892f5a6e9e89696807088fc5a1cbb22c4c594f83
	I0621 19:07:54.465729   46765 command_runner.go:130] > 9da10767b93f9fd673b0149bec75fb836426f92a6e05b0dd34b0e7b07b3575b2
	I0621 19:07:54.465737   46765 command_runner.go:130] > 02bcd841d722fb9c576107bda76adbf87c3593aa8234019fbc016f3d25c3e44c
	I0621 19:07:54.465746   46765 command_runner.go:130] > 736b6d52184414f45058085f602c2205184a11654872f5c8b09b8379789a201c
	I0621 19:07:54.465755   46765 command_runner.go:130] > 77ba488fac51d9683c16065c66b9a57f223578131eb37d5b3b8f4ee54ab59fd1
	I0621 19:07:54.465770   46765 command_runner.go:130] > 40087081e25d8085a666328a29561a84b540fe152452e7091cefd1db700e8acd
	I0621 19:07:54.465806   46765 cri.go:89] found id: "d4fd10189beef0ec38e8cb9f7a74f819461e323eae4b3b6bbddfef6886151497"
	I0621 19:07:54.465818   46765 cri.go:89] found id: "55c61aaf731d1c5b583250944ecfc821dc3d84cda2e4811057a76e46f1f7e359"
	I0621 19:07:54.465824   46765 cri.go:89] found id: "36ce441ec2d19ca6ea23c289892f5a6e9e89696807088fc5a1cbb22c4c594f83"
	I0621 19:07:54.465828   46765 cri.go:89] found id: "9da10767b93f9fd673b0149bec75fb836426f92a6e05b0dd34b0e7b07b3575b2"
	I0621 19:07:54.465832   46765 cri.go:89] found id: "02bcd841d722fb9c576107bda76adbf87c3593aa8234019fbc016f3d25c3e44c"
	I0621 19:07:54.465837   46765 cri.go:89] found id: "736b6d52184414f45058085f602c2205184a11654872f5c8b09b8379789a201c"
	I0621 19:07:54.465841   46765 cri.go:89] found id: "77ba488fac51d9683c16065c66b9a57f223578131eb37d5b3b8f4ee54ab59fd1"
	I0621 19:07:54.465845   46765 cri.go:89] found id: "40087081e25d8085a666328a29561a84b540fe152452e7091cefd1db700e8acd"
	I0621 19:07:54.465849   46765 cri.go:89] found id: ""
	I0621 19:07:54.465892   46765 ssh_runner.go:195] Run: sudo runc list -f json
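The container listing above comes from crictl filtered by the kube-system namespace label; any of the returned IDs can then be inspected the same way. A sketch using the first ID from the list:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo crictl inspect d4fd10189beef0ec38e8cb9f7a74f819461e323eae4b3b6bbddfef6886151497 | head -n 20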
	
	
	==> CRI-O <==
	Jun 21 19:09:16 multinode-851952 crio[2806]: time="2024-06-21 19:09:16.948568219Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718996956948545329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=41351d0e-d729-4d40-9676-d3383ba16be8 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:09:16 multinode-851952 crio[2806]: time="2024-06-21 19:09:16.949328868Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=626c5800-f0b5-4351-b668-3bd4f3a033be name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:09:16 multinode-851952 crio[2806]: time="2024-06-21 19:09:16.949386585Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=626c5800-f0b5-4351-b668-3bd4f3a033be name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:09:16 multinode-851952 crio[2806]: time="2024-06-21 19:09:16.949784485Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:89439fcc1faf71b39c69b6a49edcbc1b6ef6fea006f079a6e358e1f90c3fecc2,PodSandboxId:11ffe81acbb509d6e0065ceda3e866ebfbe28073ff1690800bacf6fb1bf8fd2b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718996915220929518,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rwq2d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fb5aa3b9-e31c-486b-bc01-8faea6986d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 8bec9b05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c4b975ffa0bef3cdb48bc25f7eeab1294213df3e8d6a05c2e892207c0dc173,PodSandboxId:8de73709691a3b536d412aa59c89afea9748eebfdd169c7ff833decf6dfedd92,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718996881829444853,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mrcqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68820bdc-6391-4f97-ab90-8d100de2f0f1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c4d278e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d83a92ace1cee76af2d2a4d4514bb4b9d0fad8467cf635f92b479fb7e23808a,PodSandboxId:de3f9b7d54bb3b4b481c96e19a9dae56796e2caa60345f32ed8b757156b1c514,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718996881629457186,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hfwfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3abeb3c8-683d-4272-ae28-0193331f528d,},Annotations:map[string]string{io.kubernetes.container.hash: 7d20c077,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c77c4e18ef1f9562589ad7ac29c7ba5f0f96004e260278d0c12d931432215302,PodSandboxId:e1a72fca3965a39e75a5f23dddc1a4baf47d16ef093ca24ff1c7674fb943b0e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718996881547572635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lcgp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9727b60b-2689-4f26-9276-88efe3296374,},Annotations:map[string]
string{io.kubernetes.container.hash: 4f380793,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bedc6977a27755e7fca1a63c1a9d00f1f0a54d82eb2a3187c77142615620d46c,PodSandboxId:57df3e36569309ca23c946b1b7dc2e5d36bd036346295db69c2c58dc58f8dbd8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718996881487065756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e789867a-771b-4879-b010-02d710e5742a,},Annotations:map[string]string{io.ku
bernetes.container.hash: cf259908,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1932382e2a0188fb72b28909ac83ee13bd52cc5f6e016e8ffd77d1e3a08a85a2,PodSandboxId:b0b8ca34537094fd7a9f711b801ac3c6686630582e7ccc9c42d2abbaccd297fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718996876686496232,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbec4e210bed61a23dcce0a53847ec6c,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5542071560d99295e74d925dd1e1d98c6a2b5f390f06a009e0c29e5386fa968e,PodSandboxId:64993081fa8fff04f4f1dbcce496c8024a02ae07915a4d8d8f7d952613b684e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718996876705109488,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9986c671c608acd2d2a568d12af3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 2929e396,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e6c1b76c6742180378fd84a02cf3d13dc8f538fd4759f90984ca1b0cfbda0d,PodSandboxId:0154e8f660b7e7d416a8a8ed92578b387760a78e869e971dc4c01acd5d7797bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718996876697284640,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 031ccabb4efca1565643eb6b5f5e2ec8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48fda169ce7646008e0341848682e03953afca18591e5318433acb9c645b3d49,PodSandboxId:720bfedbf7fc66572645bef4a5387a6ebfd7f71fd44b76a818b5b724fa9ea1f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718996876596211428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1f2f20f6ad7034c0592078e31b5614,},Annotations:map[string]string{io.kubernetes.container.hash: a2b1940a,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e796e77879a17462fdc8d1e3c5bdb29549cdfd9e2f6e289a21a6e43b02a4d331,PodSandboxId:ea35aa521b03b7fec8d5fc6be4a34df88045d1d48b103149e23be06f072d7307,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718996582296495624,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rwq2d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fb5aa3b9-e31c-486b-bc01-8faea6986d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 8bec9b05,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4fd10189beef0ec38e8cb9f7a74f819461e323eae4b3b6bbddfef6886151497,PodSandboxId:4fb33dfce476ed3823e8cca1f72ed14304faa62f3b41d1dfa1ab27273fe35ca0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718996534883309651,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hfwfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3abeb3c8-683d-4272-ae28-0193331f528d,},Annotations:map[string]string{io.kubernetes.container.hash: 7d20c077,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c61aaf731d1c5b583250944ecfc821dc3d84cda2e4811057a76e46f1f7e359,PodSandboxId:85cb6ca1d24caad46359b6da0ba5d7fef334a953ca31936ea5facd138aa034f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718996534811058203,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e789867a-771b-4879-b010-02d710e5742a,},Annotations:map[string]string{io.kubernetes.container.hash: cf259908,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36ce441ec2d19ca6ea23c289892f5a6e9e89696807088fc5a1cbb22c4c594f83,PodSandboxId:37894f95939c33d031fb7adf9cda5a47b3dc82a9dbfd34fe898761937ab04af4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718996533343332351,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mrcqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 68820bdc-6391-4f97-ab90-8d100de2f0f1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c4d278e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9da10767b93f9fd673b0149bec75fb836426f92a6e05b0dd34b0e7b07b3575b2,PodSandboxId:e1ed586db133d068777a4a215969814542284b04d8298438220678fba936ea1e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718996532465193333,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lcgp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9727b60b-2689-4f26-9276-
88efe3296374,},Annotations:map[string]string{io.kubernetes.container.hash: 4f380793,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02bcd841d722fb9c576107bda76adbf87c3593aa8234019fbc016f3d25c3e44c,PodSandboxId:f5f532e3c35f66380c8143c9e540c938aeeba0dd60a49015303fd6952fa2dc57,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718996513757373692,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9986c671c608acd2d2a568d12af3b4,},Annotations:map[string]string{
io.kubernetes.container.hash: 2929e396,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:736b6d52184414f45058085f602c2205184a11654872f5c8b09b8379789a201c,PodSandboxId:8f18790ab0368780fe3ac2954123025233266fb448a70fc0a4179487baaa7a70,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1718996513731665505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 031ccabb4efca1565643eb6b5f5e2ec8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77ba488fac51d9683c16065c66b9a57f223578131eb37d5b3b8f4ee54ab59fd1,PodSandboxId:7c79852f0ef58d2fd5cddb43f247fbd33f807747284ae3b9a450f82832050f49,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718996513719491108,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbec4e210bed61a23dcce0a53847ec6c,},Annotations:map[string]string{io.
kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40087081e25d8085a666328a29561a84b540fe152452e7091cefd1db700e8acd,PodSandboxId:d7d511623babc445d61565e6e4603b379b5ec9e9dae0a1cf899e328e6b73c2ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1718996513652670101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1f2f20f6ad7034c0592078e31b5614,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: a2b1940a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=626c5800-f0b5-4351-b668-3bd4f3a033be name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:09:16 multinode-851952 crio[2806]: time="2024-06-21 19:09:16.987115952Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f5e6a05b-09fd-423d-a72e-cd2d2d4f59fb name=/runtime.v1.RuntimeService/Version
	Jun 21 19:09:16 multinode-851952 crio[2806]: time="2024-06-21 19:09:16.987235002Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f5e6a05b-09fd-423d-a72e-cd2d2d4f59fb name=/runtime.v1.RuntimeService/Version
	Jun 21 19:09:16 multinode-851952 crio[2806]: time="2024-06-21 19:09:16.988469389Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45193a59-e9dc-4894-b60a-3efe3237e7b2 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:09:16 multinode-851952 crio[2806]: time="2024-06-21 19:09:16.988836895Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718996956988815981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45193a59-e9dc-4894-b60a-3efe3237e7b2 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:09:16 multinode-851952 crio[2806]: time="2024-06-21 19:09:16.989642690Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=200ada6b-029e-42b5-a10d-b20fce624d40 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:09:16 multinode-851952 crio[2806]: time="2024-06-21 19:09:16.989700863Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=200ada6b-029e-42b5-a10d-b20fce624d40 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:09:16 multinode-851952 crio[2806]: time="2024-06-21 19:09:16.990332980Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:89439fcc1faf71b39c69b6a49edcbc1b6ef6fea006f079a6e358e1f90c3fecc2,PodSandboxId:11ffe81acbb509d6e0065ceda3e866ebfbe28073ff1690800bacf6fb1bf8fd2b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718996915220929518,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rwq2d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fb5aa3b9-e31c-486b-bc01-8faea6986d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 8bec9b05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c4b975ffa0bef3cdb48bc25f7eeab1294213df3e8d6a05c2e892207c0dc173,PodSandboxId:8de73709691a3b536d412aa59c89afea9748eebfdd169c7ff833decf6dfedd92,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718996881829444853,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mrcqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68820bdc-6391-4f97-ab90-8d100de2f0f1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c4d278e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d83a92ace1cee76af2d2a4d4514bb4b9d0fad8467cf635f92b479fb7e23808a,PodSandboxId:de3f9b7d54bb3b4b481c96e19a9dae56796e2caa60345f32ed8b757156b1c514,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718996881629457186,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hfwfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3abeb3c8-683d-4272-ae28-0193331f528d,},Annotations:map[string]string{io.kubernetes.container.hash: 7d20c077,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c77c4e18ef1f9562589ad7ac29c7ba5f0f96004e260278d0c12d931432215302,PodSandboxId:e1a72fca3965a39e75a5f23dddc1a4baf47d16ef093ca24ff1c7674fb943b0e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718996881547572635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lcgp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9727b60b-2689-4f26-9276-88efe3296374,},Annotations:map[string]
string{io.kubernetes.container.hash: 4f380793,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bedc6977a27755e7fca1a63c1a9d00f1f0a54d82eb2a3187c77142615620d46c,PodSandboxId:57df3e36569309ca23c946b1b7dc2e5d36bd036346295db69c2c58dc58f8dbd8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718996881487065756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e789867a-771b-4879-b010-02d710e5742a,},Annotations:map[string]string{io.ku
bernetes.container.hash: cf259908,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1932382e2a0188fb72b28909ac83ee13bd52cc5f6e016e8ffd77d1e3a08a85a2,PodSandboxId:b0b8ca34537094fd7a9f711b801ac3c6686630582e7ccc9c42d2abbaccd297fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718996876686496232,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbec4e210bed61a23dcce0a53847ec6c,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5542071560d99295e74d925dd1e1d98c6a2b5f390f06a009e0c29e5386fa968e,PodSandboxId:64993081fa8fff04f4f1dbcce496c8024a02ae07915a4d8d8f7d952613b684e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718996876705109488,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9986c671c608acd2d2a568d12af3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 2929e396,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e6c1b76c6742180378fd84a02cf3d13dc8f538fd4759f90984ca1b0cfbda0d,PodSandboxId:0154e8f660b7e7d416a8a8ed92578b387760a78e869e971dc4c01acd5d7797bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718996876697284640,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 031ccabb4efca1565643eb6b5f5e2ec8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48fda169ce7646008e0341848682e03953afca18591e5318433acb9c645b3d49,PodSandboxId:720bfedbf7fc66572645bef4a5387a6ebfd7f71fd44b76a818b5b724fa9ea1f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718996876596211428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1f2f20f6ad7034c0592078e31b5614,},Annotations:map[string]string{io.kubernetes.container.hash: a2b1940a,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e796e77879a17462fdc8d1e3c5bdb29549cdfd9e2f6e289a21a6e43b02a4d331,PodSandboxId:ea35aa521b03b7fec8d5fc6be4a34df88045d1d48b103149e23be06f072d7307,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718996582296495624,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rwq2d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fb5aa3b9-e31c-486b-bc01-8faea6986d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 8bec9b05,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4fd10189beef0ec38e8cb9f7a74f819461e323eae4b3b6bbddfef6886151497,PodSandboxId:4fb33dfce476ed3823e8cca1f72ed14304faa62f3b41d1dfa1ab27273fe35ca0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718996534883309651,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hfwfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3abeb3c8-683d-4272-ae28-0193331f528d,},Annotations:map[string]string{io.kubernetes.container.hash: 7d20c077,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c61aaf731d1c5b583250944ecfc821dc3d84cda2e4811057a76e46f1f7e359,PodSandboxId:85cb6ca1d24caad46359b6da0ba5d7fef334a953ca31936ea5facd138aa034f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718996534811058203,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e789867a-771b-4879-b010-02d710e5742a,},Annotations:map[string]string{io.kubernetes.container.hash: cf259908,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36ce441ec2d19ca6ea23c289892f5a6e9e89696807088fc5a1cbb22c4c594f83,PodSandboxId:37894f95939c33d031fb7adf9cda5a47b3dc82a9dbfd34fe898761937ab04af4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718996533343332351,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mrcqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 68820bdc-6391-4f97-ab90-8d100de2f0f1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c4d278e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9da10767b93f9fd673b0149bec75fb836426f92a6e05b0dd34b0e7b07b3575b2,PodSandboxId:e1ed586db133d068777a4a215969814542284b04d8298438220678fba936ea1e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718996532465193333,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lcgp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9727b60b-2689-4f26-9276-
88efe3296374,},Annotations:map[string]string{io.kubernetes.container.hash: 4f380793,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02bcd841d722fb9c576107bda76adbf87c3593aa8234019fbc016f3d25c3e44c,PodSandboxId:f5f532e3c35f66380c8143c9e540c938aeeba0dd60a49015303fd6952fa2dc57,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718996513757373692,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9986c671c608acd2d2a568d12af3b4,},Annotations:map[string]string{
io.kubernetes.container.hash: 2929e396,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:736b6d52184414f45058085f602c2205184a11654872f5c8b09b8379789a201c,PodSandboxId:8f18790ab0368780fe3ac2954123025233266fb448a70fc0a4179487baaa7a70,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1718996513731665505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 031ccabb4efca1565643eb6b5f5e2ec8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77ba488fac51d9683c16065c66b9a57f223578131eb37d5b3b8f4ee54ab59fd1,PodSandboxId:7c79852f0ef58d2fd5cddb43f247fbd33f807747284ae3b9a450f82832050f49,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718996513719491108,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbec4e210bed61a23dcce0a53847ec6c,},Annotations:map[string]string{io.
kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40087081e25d8085a666328a29561a84b540fe152452e7091cefd1db700e8acd,PodSandboxId:d7d511623babc445d61565e6e4603b379b5ec9e9dae0a1cf899e328e6b73c2ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1718996513652670101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1f2f20f6ad7034c0592078e31b5614,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: a2b1940a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=200ada6b-029e-42b5-a10d-b20fce624d40 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:09:17 multinode-851952 crio[2806]: time="2024-06-21 19:09:17.030862154Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7344960f-b514-4174-87ae-009baabdb876 name=/runtime.v1.RuntimeService/Version
	Jun 21 19:09:17 multinode-851952 crio[2806]: time="2024-06-21 19:09:17.030949790Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7344960f-b514-4174-87ae-009baabdb876 name=/runtime.v1.RuntimeService/Version
	Jun 21 19:09:17 multinode-851952 crio[2806]: time="2024-06-21 19:09:17.031834138Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eb96aea7-9c3d-465d-8fe2-4131b6fa1fa7 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:09:17 multinode-851952 crio[2806]: time="2024-06-21 19:09:17.032415395Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718996957032391245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb96aea7-9c3d-465d-8fe2-4131b6fa1fa7 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:09:17 multinode-851952 crio[2806]: time="2024-06-21 19:09:17.032958237Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3d2fcf7-94e6-441c-a0d2-a6ac7cd928bb name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:09:17 multinode-851952 crio[2806]: time="2024-06-21 19:09:17.033013299Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3d2fcf7-94e6-441c-a0d2-a6ac7cd928bb name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:09:17 multinode-851952 crio[2806]: time="2024-06-21 19:09:17.033498777Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:89439fcc1faf71b39c69b6a49edcbc1b6ef6fea006f079a6e358e1f90c3fecc2,PodSandboxId:11ffe81acbb509d6e0065ceda3e866ebfbe28073ff1690800bacf6fb1bf8fd2b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718996915220929518,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rwq2d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fb5aa3b9-e31c-486b-bc01-8faea6986d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 8bec9b05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c4b975ffa0bef3cdb48bc25f7eeab1294213df3e8d6a05c2e892207c0dc173,PodSandboxId:8de73709691a3b536d412aa59c89afea9748eebfdd169c7ff833decf6dfedd92,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718996881829444853,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mrcqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68820bdc-6391-4f97-ab90-8d100de2f0f1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c4d278e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d83a92ace1cee76af2d2a4d4514bb4b9d0fad8467cf635f92b479fb7e23808a,PodSandboxId:de3f9b7d54bb3b4b481c96e19a9dae56796e2caa60345f32ed8b757156b1c514,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718996881629457186,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hfwfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3abeb3c8-683d-4272-ae28-0193331f528d,},Annotations:map[string]string{io.kubernetes.container.hash: 7d20c077,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c77c4e18ef1f9562589ad7ac29c7ba5f0f96004e260278d0c12d931432215302,PodSandboxId:e1a72fca3965a39e75a5f23dddc1a4baf47d16ef093ca24ff1c7674fb943b0e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718996881547572635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lcgp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9727b60b-2689-4f26-9276-88efe3296374,},Annotations:map[string]
string{io.kubernetes.container.hash: 4f380793,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bedc6977a27755e7fca1a63c1a9d00f1f0a54d82eb2a3187c77142615620d46c,PodSandboxId:57df3e36569309ca23c946b1b7dc2e5d36bd036346295db69c2c58dc58f8dbd8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718996881487065756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e789867a-771b-4879-b010-02d710e5742a,},Annotations:map[string]string{io.ku
bernetes.container.hash: cf259908,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1932382e2a0188fb72b28909ac83ee13bd52cc5f6e016e8ffd77d1e3a08a85a2,PodSandboxId:b0b8ca34537094fd7a9f711b801ac3c6686630582e7ccc9c42d2abbaccd297fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718996876686496232,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbec4e210bed61a23dcce0a53847ec6c,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5542071560d99295e74d925dd1e1d98c6a2b5f390f06a009e0c29e5386fa968e,PodSandboxId:64993081fa8fff04f4f1dbcce496c8024a02ae07915a4d8d8f7d952613b684e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718996876705109488,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9986c671c608acd2d2a568d12af3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 2929e396,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e6c1b76c6742180378fd84a02cf3d13dc8f538fd4759f90984ca1b0cfbda0d,PodSandboxId:0154e8f660b7e7d416a8a8ed92578b387760a78e869e971dc4c01acd5d7797bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718996876697284640,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 031ccabb4efca1565643eb6b5f5e2ec8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48fda169ce7646008e0341848682e03953afca18591e5318433acb9c645b3d49,PodSandboxId:720bfedbf7fc66572645bef4a5387a6ebfd7f71fd44b76a818b5b724fa9ea1f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718996876596211428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1f2f20f6ad7034c0592078e31b5614,},Annotations:map[string]string{io.kubernetes.container.hash: a2b1940a,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e796e77879a17462fdc8d1e3c5bdb29549cdfd9e2f6e289a21a6e43b02a4d331,PodSandboxId:ea35aa521b03b7fec8d5fc6be4a34df88045d1d48b103149e23be06f072d7307,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718996582296495624,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rwq2d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fb5aa3b9-e31c-486b-bc01-8faea6986d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 8bec9b05,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4fd10189beef0ec38e8cb9f7a74f819461e323eae4b3b6bbddfef6886151497,PodSandboxId:4fb33dfce476ed3823e8cca1f72ed14304faa62f3b41d1dfa1ab27273fe35ca0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718996534883309651,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hfwfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3abeb3c8-683d-4272-ae28-0193331f528d,},Annotations:map[string]string{io.kubernetes.container.hash: 7d20c077,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c61aaf731d1c5b583250944ecfc821dc3d84cda2e4811057a76e46f1f7e359,PodSandboxId:85cb6ca1d24caad46359b6da0ba5d7fef334a953ca31936ea5facd138aa034f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718996534811058203,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e789867a-771b-4879-b010-02d710e5742a,},Annotations:map[string]string{io.kubernetes.container.hash: cf259908,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36ce441ec2d19ca6ea23c289892f5a6e9e89696807088fc5a1cbb22c4c594f83,PodSandboxId:37894f95939c33d031fb7adf9cda5a47b3dc82a9dbfd34fe898761937ab04af4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718996533343332351,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mrcqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 68820bdc-6391-4f97-ab90-8d100de2f0f1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c4d278e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9da10767b93f9fd673b0149bec75fb836426f92a6e05b0dd34b0e7b07b3575b2,PodSandboxId:e1ed586db133d068777a4a215969814542284b04d8298438220678fba936ea1e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718996532465193333,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lcgp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9727b60b-2689-4f26-9276-
88efe3296374,},Annotations:map[string]string{io.kubernetes.container.hash: 4f380793,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02bcd841d722fb9c576107bda76adbf87c3593aa8234019fbc016f3d25c3e44c,PodSandboxId:f5f532e3c35f66380c8143c9e540c938aeeba0dd60a49015303fd6952fa2dc57,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718996513757373692,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9986c671c608acd2d2a568d12af3b4,},Annotations:map[string]string{
io.kubernetes.container.hash: 2929e396,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:736b6d52184414f45058085f602c2205184a11654872f5c8b09b8379789a201c,PodSandboxId:8f18790ab0368780fe3ac2954123025233266fb448a70fc0a4179487baaa7a70,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1718996513731665505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 031ccabb4efca1565643eb6b5f5e2ec8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77ba488fac51d9683c16065c66b9a57f223578131eb37d5b3b8f4ee54ab59fd1,PodSandboxId:7c79852f0ef58d2fd5cddb43f247fbd33f807747284ae3b9a450f82832050f49,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718996513719491108,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbec4e210bed61a23dcce0a53847ec6c,},Annotations:map[string]string{io.
kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40087081e25d8085a666328a29561a84b540fe152452e7091cefd1db700e8acd,PodSandboxId:d7d511623babc445d61565e6e4603b379b5ec9e9dae0a1cf899e328e6b73c2ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1718996513652670101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1f2f20f6ad7034c0592078e31b5614,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: a2b1940a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f3d2fcf7-94e6-441c-a0d2-a6ac7cd928bb name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:09:17 multinode-851952 crio[2806]: time="2024-06-21 19:09:17.071933588Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7d04bc62-28ca-4e51-9d02-a98f242a3a91 name=/runtime.v1.RuntimeService/Version
	Jun 21 19:09:17 multinode-851952 crio[2806]: time="2024-06-21 19:09:17.072122216Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7d04bc62-28ca-4e51-9d02-a98f242a3a91 name=/runtime.v1.RuntimeService/Version
	Jun 21 19:09:17 multinode-851952 crio[2806]: time="2024-06-21 19:09:17.073055671Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=60f37d55-629d-4cfa-8ad5-98bbc300ffc4 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:09:17 multinode-851952 crio[2806]: time="2024-06-21 19:09:17.073685206Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718996957073656306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60f37d55-629d-4cfa-8ad5-98bbc300ffc4 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:09:17 multinode-851952 crio[2806]: time="2024-06-21 19:09:17.074243658Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c191caf7-fa3b-4e9b-a057-ae84dc0b6fe4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:09:17 multinode-851952 crio[2806]: time="2024-06-21 19:09:17.074306809Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c191caf7-fa3b-4e9b-a057-ae84dc0b6fe4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:09:17 multinode-851952 crio[2806]: time="2024-06-21 19:09:17.074782877Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:89439fcc1faf71b39c69b6a49edcbc1b6ef6fea006f079a6e358e1f90c3fecc2,PodSandboxId:11ffe81acbb509d6e0065ceda3e866ebfbe28073ff1690800bacf6fb1bf8fd2b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718996915220929518,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rwq2d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fb5aa3b9-e31c-486b-bc01-8faea6986d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 8bec9b05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c4b975ffa0bef3cdb48bc25f7eeab1294213df3e8d6a05c2e892207c0dc173,PodSandboxId:8de73709691a3b536d412aa59c89afea9748eebfdd169c7ff833decf6dfedd92,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718996881829444853,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mrcqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68820bdc-6391-4f97-ab90-8d100de2f0f1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c4d278e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d83a92ace1cee76af2d2a4d4514bb4b9d0fad8467cf635f92b479fb7e23808a,PodSandboxId:de3f9b7d54bb3b4b481c96e19a9dae56796e2caa60345f32ed8b757156b1c514,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718996881629457186,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hfwfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3abeb3c8-683d-4272-ae28-0193331f528d,},Annotations:map[string]string{io.kubernetes.container.hash: 7d20c077,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c77c4e18ef1f9562589ad7ac29c7ba5f0f96004e260278d0c12d931432215302,PodSandboxId:e1a72fca3965a39e75a5f23dddc1a4baf47d16ef093ca24ff1c7674fb943b0e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718996881547572635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lcgp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9727b60b-2689-4f26-9276-88efe3296374,},Annotations:map[string]
string{io.kubernetes.container.hash: 4f380793,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bedc6977a27755e7fca1a63c1a9d00f1f0a54d82eb2a3187c77142615620d46c,PodSandboxId:57df3e36569309ca23c946b1b7dc2e5d36bd036346295db69c2c58dc58f8dbd8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718996881487065756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e789867a-771b-4879-b010-02d710e5742a,},Annotations:map[string]string{io.ku
bernetes.container.hash: cf259908,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1932382e2a0188fb72b28909ac83ee13bd52cc5f6e016e8ffd77d1e3a08a85a2,PodSandboxId:b0b8ca34537094fd7a9f711b801ac3c6686630582e7ccc9c42d2abbaccd297fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718996876686496232,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbec4e210bed61a23dcce0a53847ec6c,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5542071560d99295e74d925dd1e1d98c6a2b5f390f06a009e0c29e5386fa968e,PodSandboxId:64993081fa8fff04f4f1dbcce496c8024a02ae07915a4d8d8f7d952613b684e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718996876705109488,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9986c671c608acd2d2a568d12af3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 2929e396,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e6c1b76c6742180378fd84a02cf3d13dc8f538fd4759f90984ca1b0cfbda0d,PodSandboxId:0154e8f660b7e7d416a8a8ed92578b387760a78e869e971dc4c01acd5d7797bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718996876697284640,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 031ccabb4efca1565643eb6b5f5e2ec8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48fda169ce7646008e0341848682e03953afca18591e5318433acb9c645b3d49,PodSandboxId:720bfedbf7fc66572645bef4a5387a6ebfd7f71fd44b76a818b5b724fa9ea1f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718996876596211428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1f2f20f6ad7034c0592078e31b5614,},Annotations:map[string]string{io.kubernetes.container.hash: a2b1940a,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e796e77879a17462fdc8d1e3c5bdb29549cdfd9e2f6e289a21a6e43b02a4d331,PodSandboxId:ea35aa521b03b7fec8d5fc6be4a34df88045d1d48b103149e23be06f072d7307,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718996582296495624,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rwq2d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fb5aa3b9-e31c-486b-bc01-8faea6986d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 8bec9b05,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4fd10189beef0ec38e8cb9f7a74f819461e323eae4b3b6bbddfef6886151497,PodSandboxId:4fb33dfce476ed3823e8cca1f72ed14304faa62f3b41d1dfa1ab27273fe35ca0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718996534883309651,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hfwfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3abeb3c8-683d-4272-ae28-0193331f528d,},Annotations:map[string]string{io.kubernetes.container.hash: 7d20c077,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c61aaf731d1c5b583250944ecfc821dc3d84cda2e4811057a76e46f1f7e359,PodSandboxId:85cb6ca1d24caad46359b6da0ba5d7fef334a953ca31936ea5facd138aa034f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718996534811058203,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e789867a-771b-4879-b010-02d710e5742a,},Annotations:map[string]string{io.kubernetes.container.hash: cf259908,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36ce441ec2d19ca6ea23c289892f5a6e9e89696807088fc5a1cbb22c4c594f83,PodSandboxId:37894f95939c33d031fb7adf9cda5a47b3dc82a9dbfd34fe898761937ab04af4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718996533343332351,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mrcqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 68820bdc-6391-4f97-ab90-8d100de2f0f1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c4d278e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9da10767b93f9fd673b0149bec75fb836426f92a6e05b0dd34b0e7b07b3575b2,PodSandboxId:e1ed586db133d068777a4a215969814542284b04d8298438220678fba936ea1e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718996532465193333,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lcgp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9727b60b-2689-4f26-9276-
88efe3296374,},Annotations:map[string]string{io.kubernetes.container.hash: 4f380793,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02bcd841d722fb9c576107bda76adbf87c3593aa8234019fbc016f3d25c3e44c,PodSandboxId:f5f532e3c35f66380c8143c9e540c938aeeba0dd60a49015303fd6952fa2dc57,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718996513757373692,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9986c671c608acd2d2a568d12af3b4,},Annotations:map[string]string{
io.kubernetes.container.hash: 2929e396,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:736b6d52184414f45058085f602c2205184a11654872f5c8b09b8379789a201c,PodSandboxId:8f18790ab0368780fe3ac2954123025233266fb448a70fc0a4179487baaa7a70,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1718996513731665505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 031ccabb4efca1565643eb6b5f5e2ec8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77ba488fac51d9683c16065c66b9a57f223578131eb37d5b3b8f4ee54ab59fd1,PodSandboxId:7c79852f0ef58d2fd5cddb43f247fbd33f807747284ae3b9a450f82832050f49,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718996513719491108,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbec4e210bed61a23dcce0a53847ec6c,},Annotations:map[string]string{io.
kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40087081e25d8085a666328a29561a84b540fe152452e7091cefd1db700e8acd,PodSandboxId:d7d511623babc445d61565e6e4603b379b5ec9e9dae0a1cf899e328e6b73c2ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1718996513652670101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1f2f20f6ad7034c0592078e31b5614,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: a2b1940a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c191caf7-fa3b-4e9b-a057-ae84dc0b6fe4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	89439fcc1faf7       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      41 seconds ago       Running             busybox                   1                   11ffe81acbb50       busybox-fc5497c4f-rwq2d
	e6c4b975ffa0b       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      About a minute ago   Running             kindnet-cni               1                   8de73709691a3       kindnet-mrcqf
	0d83a92ace1ce       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   de3f9b7d54bb3       coredns-7db6d8ff4d-hfwfj
	c77c4e18ef1f9       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      About a minute ago   Running             kube-proxy                1                   e1a72fca3965a       kube-proxy-lcgp6
	bedc6977a2775       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   57df3e3656930       storage-provisioner
	5542071560d99       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   64993081fa8ff       etcd-multinode-851952
	19e6c1b76c674       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      About a minute ago   Running             kube-controller-manager   1                   0154e8f660b7e       kube-controller-manager-multinode-851952
	1932382e2a018       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      About a minute ago   Running             kube-scheduler            1                   b0b8ca3453709       kube-scheduler-multinode-851952
	48fda169ce764       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      About a minute ago   Running             kube-apiserver            1                   720bfedbf7fc6       kube-apiserver-multinode-851952
	e796e77879a17       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   ea35aa521b03b       busybox-fc5497c4f-rwq2d
	d4fd10189beef       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   4fb33dfce476e       coredns-7db6d8ff4d-hfwfj
	55c61aaf731d1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   85cb6ca1d24ca       storage-provisioner
	36ce441ec2d19       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      7 minutes ago        Exited              kindnet-cni               0                   37894f95939c3       kindnet-mrcqf
	9da10767b93f9       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      7 minutes ago        Exited              kube-proxy                0                   e1ed586db133d       kube-proxy-lcgp6
	02bcd841d722f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   f5f532e3c35f6       etcd-multinode-851952
	736b6d5218441       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      7 minutes ago        Exited              kube-controller-manager   0                   8f18790ab0368       kube-controller-manager-multinode-851952
	77ba488fac51d       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      7 minutes ago        Exited              kube-scheduler            0                   7c79852f0ef58       kube-scheduler-multinode-851952
	40087081e25d8       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      7 minutes ago        Exited              kube-apiserver            0                   d7d511623babc       kube-apiserver-multinode-851952
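
A container listing like the table above can typically be reproduced on the node itself with crictl talking to CRI-O (a sketch, assuming the default crictl setup on the minikube guest; the profile name is the one used by this test):

  # list all containers (running and exited) that CRI-O knows about on the node
  minikube -p multinode-851952 ssh -- sudo crictl ps -a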
	
	
	==> coredns [0d83a92ace1cee76af2d2a4d4514bb4b9d0fad8467cf635f92b479fb7e23808a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57474 - 33214 "HINFO IN 3843566766519598785.8947686938218715761. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020281459s
	
	
	==> coredns [d4fd10189beef0ec38e8cb9f7a74f819461e323eae4b3b6bbddfef6886151497] <==
	[INFO] 10.244.0.3:38027 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001894799s
	[INFO] 10.244.0.3:38366 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093222s
	[INFO] 10.244.0.3:35759 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086773s
	[INFO] 10.244.0.3:58948 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001232911s
	[INFO] 10.244.0.3:55070 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081996s
	[INFO] 10.244.0.3:48492 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060519s
	[INFO] 10.244.0.3:40108 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064581s
	[INFO] 10.244.1.2:51451 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108775s
	[INFO] 10.244.1.2:43578 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102936s
	[INFO] 10.244.1.2:37621 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076841s
	[INFO] 10.244.1.2:33016 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067514s
	[INFO] 10.244.0.3:38865 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142526s
	[INFO] 10.244.0.3:37222 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084343s
	[INFO] 10.244.0.3:36593 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000042012s
	[INFO] 10.244.0.3:46334 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065568s
	[INFO] 10.244.1.2:35833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134489s
	[INFO] 10.244.1.2:43015 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215657s
	[INFO] 10.244.1.2:51209 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116353s
	[INFO] 10.244.1.2:43487 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011957s
	[INFO] 10.244.0.3:60990 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090172s
	[INFO] 10.244.0.3:47397 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000080504s
	[INFO] 10.244.0.3:57033 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000105755s
	[INFO] 10.244.0.3:39863 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000070608s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
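
Logs for the exited coredns container above can usually be pulled either from the previous pod instance via kubectl or directly from CRI-O by container ID (a sketch; the ID prefix is the one shown in the section header, and the kube context is assumed to be the profile's):

  # previous (pre-restart) instance of the coredns pod
  kubectl --context multinode-851952 -n kube-system logs coredns-7db6d8ff4d-hfwfj --previous
  # same logs straight from the runtime, by container ID prefix
  minikube -p multinode-851952 ssh -- sudo crictl logs d4fd10189beef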
	
	
	==> describe nodes <==
	Name:               multinode-851952
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-851952
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=multinode-851952
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_21T19_01_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 19:01:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-851952
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 19:09:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 19:08:00 +0000   Fri, 21 Jun 2024 19:01:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 19:08:00 +0000   Fri, 21 Jun 2024 19:01:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 19:08:00 +0000   Fri, 21 Jun 2024 19:01:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 19:08:00 +0000   Fri, 21 Jun 2024 19:02:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.146
	  Hostname:    multinode-851952
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f78df2d7ac4a44e2bd7b850a69238045
	  System UUID:                f78df2d7-ac4a-44e2-bd7b-850a69238045
	  Boot ID:                    03a98d64-ee80-454b-bc41-587e302c9c98
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rwq2d                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 coredns-7db6d8ff4d-hfwfj                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m5s
	  kube-system                 etcd-multinode-851952                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m19s
	  kube-system                 kindnet-mrcqf                                100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m6s
	  kube-system                 kube-apiserver-multinode-851952              250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 kube-controller-manager-multinode-851952    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 kube-proxy-lcgp6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	  kube-system                 kube-scheduler-multinode-851952              100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m4s                   kube-proxy       
	  Normal  Starting                 75s                    kube-proxy       
	  Normal  NodeAllocatableEnforced  7m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m19s (x2 over 7m19s)  kubelet          Node multinode-851952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m19s (x2 over 7m19s)  kubelet          Node multinode-851952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m19s (x2 over 7m19s)  kubelet          Node multinode-851952 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m19s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m6s                   node-controller  Node multinode-851952 event: Registered Node multinode-851952 in Controller
	  Normal  NodeReady                7m3s                   kubelet          Node multinode-851952 status is now: NodeReady
	  Normal  Starting                 82s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  81s (x8 over 81s)      kubelet          Node multinode-851952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    81s (x8 over 81s)      kubelet          Node multinode-851952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     81s (x7 over 81s)      kubelet          Node multinode-851952 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  81s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           65s                    node-controller  Node multinode-851952 event: Registered Node multinode-851952 in Controller
	
	
	Name:               multinode-851952-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-851952-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=multinode-851952
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_21T19_08_39_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 19:08:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-851952-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 19:09:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 19:09:09 +0000   Fri, 21 Jun 2024 19:08:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 19:09:09 +0000   Fri, 21 Jun 2024 19:08:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 19:09:09 +0000   Fri, 21 Jun 2024 19:08:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 19:09:09 +0000   Fri, 21 Jun 2024 19:08:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    multinode-851952-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8271f20fbcd24ec09b78fd28c81fb7db
	  System UUID:                8271f20f-bcd2-4ec0-9b78-fd28c81fb7db
	  Boot ID:                    4c4aeb8f-7a73-4eee-bbc5-551a745965dc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6s5z7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kindnet-s78xt              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m30s
	  kube-system                 kube-proxy-lsb9b           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 34s                    kube-proxy       
	  Normal  Starting                 6m24s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m30s (x3 over 6m30s)  kubelet          Node multinode-851952-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m30s (x3 over 6m30s)  kubelet          Node multinode-851952-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m30s (x3 over 6m30s)  kubelet          Node multinode-851952-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m21s                  kubelet          Node multinode-851952-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  39s (x2 over 39s)      kubelet          Node multinode-851952-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s (x2 over 39s)      kubelet          Node multinode-851952-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s (x2 over 39s)      kubelet          Node multinode-851952-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  39s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           35s                    node-controller  Node multinode-851952-m02 event: Registered Node multinode-851952-m02 in Controller
	  Normal  NodeReady                31s                    kubelet          Node multinode-851952-m02 status is now: NodeReady
	
	
	Name:               multinode-851952-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-851952-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=multinode-851952
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_21T19_09_05_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 19:09:05 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-851952-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 19:09:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 19:09:14 +0000   Fri, 21 Jun 2024 19:09:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 19:09:14 +0000   Fri, 21 Jun 2024 19:09:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 19:09:14 +0000   Fri, 21 Jun 2024 19:09:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 19:09:14 +0000   Fri, 21 Jun 2024 19:09:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.135
	  Hostname:    multinode-851952-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0fb7b48b67004262ac09495357782c0f
	  System UUID:                0fb7b48b-6700-4262-ac09-495357782c0f
	  Boot ID:                    f9f10dea-b9e0-4fe2-934f-cf4858b025f5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2jbqx       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m48s
	  kube-system                 kube-proxy-wmc6k    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m42s                  kube-proxy  
	  Normal  Starting                 7s                     kube-proxy  
	  Normal  Starting                 5m5s                   kube-proxy  
	  Normal  NodeHasSufficientMemory  5m48s (x2 over 5m48s)  kubelet     Node multinode-851952-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m48s (x2 over 5m48s)  kubelet     Node multinode-851952-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m48s (x2 over 5m48s)  kubelet     Node multinode-851952-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m48s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m38s                  kubelet     Node multinode-851952-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m10s (x2 over 5m10s)  kubelet     Node multinode-851952-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m10s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m10s (x2 over 5m10s)  kubelet     Node multinode-851952-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m10s (x2 over 5m10s)  kubelet     Node multinode-851952-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m2s                   kubelet     Node multinode-851952-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  12s (x2 over 12s)      kubelet     Node multinode-851952-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x2 over 12s)      kubelet     Node multinode-851952-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x2 over 12s)      kubelet     Node multinode-851952-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-851952-m03 status is now: NodeReady
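
The node descriptions above are standard kubectl output; against this cluster they could be regenerated with (a sketch, using the kube context minikube writes for the profile):

  # describe all nodes in the multinode profile's cluster
  kubectl --context multinode-851952 describe nodes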
	
	
	==> dmesg <==
	[  +7.067123] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.058349] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054929] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.163287] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.130066] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.258501] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +3.890307] systemd-fstab-generator[758]: Ignoring "noauto" option for root device
	[  +3.444602] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.060031] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.989200] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	[  +0.096631] kauditd_printk_skb: 69 callbacks suppressed
	[Jun21 19:02] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.779460] systemd-fstab-generator[1452]: Ignoring "noauto" option for root device
	[ +47.264708] kauditd_printk_skb: 84 callbacks suppressed
	[Jun21 19:07] systemd-fstab-generator[2724]: Ignoring "noauto" option for root device
	[  +0.159150] systemd-fstab-generator[2736]: Ignoring "noauto" option for root device
	[  +0.172053] systemd-fstab-generator[2750]: Ignoring "noauto" option for root device
	[  +0.149964] systemd-fstab-generator[2762]: Ignoring "noauto" option for root device
	[  +0.275310] systemd-fstab-generator[2791]: Ignoring "noauto" option for root device
	[  +0.721263] systemd-fstab-generator[2891]: Ignoring "noauto" option for root device
	[  +1.830056] systemd-fstab-generator[3013]: Ignoring "noauto" option for root device
	[Jun21 19:08] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.357287] kauditd_printk_skb: 32 callbacks suppressed
	[  +1.362773] systemd-fstab-generator[3837]: Ignoring "noauto" option for root device
	[ +21.028873] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [02bcd841d722fb9c576107bda76adbf87c3593aa8234019fbc016f3d25c3e44c] <==
	{"level":"info","ts":"2024-06-21T19:01:56.659981Z","caller":"traceutil/trace.go:171","msg":"trace[1007002446] range","detail":"{range_begin:/registry/minions/multinode-851952; range_end:; response_count:1; response_revision:17; }","duration":"381.864173ms","start":"2024-06-21T19:01:56.278111Z","end":"2024-06-21T19:01:56.659975Z","steps":["trace[1007002446] 'agreement among raft nodes before linearized reading'  (duration: 381.822894ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-21T19:01:56.659995Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-21T19:01:56.278104Z","time spent":"381.887598ms","remote":"127.0.0.1:51826","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":1,"response size":4153,"request content":"key:\"/registry/minions/multinode-851952\" "}
	{"level":"warn","ts":"2024-06-21T19:02:47.581962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.142106ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8751778564018602588 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-2rff2\" mod_revision:446 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-2rff2\" value_size:2301 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-2rff2\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-21T19:02:47.582066Z","caller":"traceutil/trace.go:171","msg":"trace[1161547719] transaction","detail":"{read_only:false; response_revision:447; number_of_response:1; }","duration":"256.169508ms","start":"2024-06-21T19:02:47.325878Z","end":"2024-06-21T19:02:47.582047Z","steps":["trace[1161547719] 'process raft request'  (duration: 103.558178ms)","trace[1161547719] 'compare'  (duration: 151.893494ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T19:02:53.34323Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.669743ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8751778564018602677 > lease_revoke:<id:7974903c2d35022d>","response":"size:28"}
	{"level":"info","ts":"2024-06-21T19:02:53.34331Z","caller":"traceutil/trace.go:171","msg":"trace[223530444] linearizableReadLoop","detail":"{readStateIndex:507; appliedIndex:506; }","duration":"176.948375ms","start":"2024-06-21T19:02:53.166351Z","end":"2024-06-21T19:02:53.3433Z","steps":["trace[223530444] 'read index received'  (duration: 48.136682ms)","trace[223530444] 'applied index is now lower than readState.Index'  (duration: 128.811057ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T19:02:53.34338Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.011996ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-06-21T19:02:53.343398Z","caller":"traceutil/trace.go:171","msg":"trace[443458560] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:485; }","duration":"177.063288ms","start":"2024-06-21T19:02:53.166326Z","end":"2024-06-21T19:02:53.34339Z","steps":["trace[443458560] 'agreement among raft nodes before linearized reading'  (duration: 176.999966ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-21T19:02:53.752355Z","caller":"traceutil/trace.go:171","msg":"trace[109371543] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"143.337216ms","start":"2024-06-21T19:02:53.609003Z","end":"2024-06-21T19:02:53.75234Z","steps":["trace[109371543] 'process raft request'  (duration: 143.18782ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-21T19:03:29.608496Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.044752ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8751778564018602979 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-851952-m03.17db1a4efc4c2e32\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-851952-m03.17db1a4efc4c2e32\" value_size:642 lease:8751778564018602744 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-06-21T19:03:29.608815Z","caller":"traceutil/trace.go:171","msg":"trace[1541725632] linearizableReadLoop","detail":"{readStateIndex:601; appliedIndex:599; }","duration":"145.735042ms","start":"2024-06-21T19:03:29.463061Z","end":"2024-06-21T19:03:29.608796Z","steps":["trace[1541725632] 'read index received'  (duration: 145.139281ms)","trace[1541725632] 'applied index is now lower than readState.Index'  (duration: 595.064µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-21T19:03:29.608894Z","caller":"traceutil/trace.go:171","msg":"trace[153832864] transaction","detail":"{read_only:false; response_revision:570; number_of_response:1; }","duration":"254.112781ms","start":"2024-06-21T19:03:29.354775Z","end":"2024-06-21T19:03:29.608888Z","steps":["trace[153832864] 'process raft request'  (duration: 102.621438ms)","trace[153832864] 'compare'  (duration: 150.87439ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T19:03:29.60907Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.983143ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-851952-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-06-21T19:03:29.609199Z","caller":"traceutil/trace.go:171","msg":"trace[1470733736] range","detail":"{range_begin:/registry/minions/multinode-851952-m03; range_end:; response_count:1; response_revision:571; }","duration":"146.089683ms","start":"2024-06-21T19:03:29.463037Z","end":"2024-06-21T19:03:29.609127Z","steps":["trace[1470733736] 'agreement among raft nodes before linearized reading'  (duration: 145.926816ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-21T19:03:29.609595Z","caller":"traceutil/trace.go:171","msg":"trace[1010777253] transaction","detail":"{read_only:false; response_revision:571; number_of_response:1; }","duration":"205.617943ms","start":"2024-06-21T19:03:29.403963Z","end":"2024-06-21T19:03:29.609581Z","steps":["trace[1010777253] 'process raft request'  (duration: 204.792384ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-21T19:06:21.090015Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-06-21T19:06:21.09017Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-851952","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.146:2380"],"advertise-client-urls":["https://192.168.39.146:2379"]}
	{"level":"warn","ts":"2024-06-21T19:06:21.090298Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-21T19:06:21.090413Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-21T19:06:21.176089Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.146:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-21T19:06:21.17621Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.146:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-21T19:06:21.177814Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"fc85001aa37e7974","current-leader-member-id":"fc85001aa37e7974"}
	{"level":"info","ts":"2024-06-21T19:06:21.180369Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.146:2380"}
	{"level":"info","ts":"2024-06-21T19:06:21.180494Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.146:2380"}
	{"level":"info","ts":"2024-06-21T19:06:21.180505Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-851952","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.146:2380"],"advertise-client-urls":["https://192.168.39.146:2379"]}
	
	
	==> etcd [5542071560d99295e74d925dd1e1d98c6a2b5f390f06a009e0c29e5386fa968e] <==
	{"level":"info","ts":"2024-06-21T19:07:57.180625Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"25c4f0770a3181de","local-member-id":"fc85001aa37e7974","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T19:07:57.180706Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T19:07:57.182655Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-21T19:07:57.193439Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fc85001aa37e7974","initial-advertise-peer-urls":["https://192.168.39.146:2380"],"listen-peer-urls":["https://192.168.39.146:2380"],"advertise-client-urls":["https://192.168.39.146:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.146:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-21T19:07:57.193699Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-21T19:07:57.185487Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.146:2380"}
	{"level":"info","ts":"2024-06-21T19:07:57.203208Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.146:2380"}
	{"level":"info","ts":"2024-06-21T19:07:58.811857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-21T19:07:58.811915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-21T19:07:58.811957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 received MsgPreVoteResp from fc85001aa37e7974 at term 2"}
	{"level":"info","ts":"2024-06-21T19:07:58.811971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 became candidate at term 3"}
	{"level":"info","ts":"2024-06-21T19:07:58.811976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 received MsgVoteResp from fc85001aa37e7974 at term 3"}
	{"level":"info","ts":"2024-06-21T19:07:58.811984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 became leader at term 3"}
	{"level":"info","ts":"2024-06-21T19:07:58.811995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fc85001aa37e7974 elected leader fc85001aa37e7974 at term 3"}
	{"level":"info","ts":"2024-06-21T19:07:58.817432Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T19:07:58.819352Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.146:2379"}
	{"level":"info","ts":"2024-06-21T19:07:58.817389Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fc85001aa37e7974","local-member-attributes":"{Name:multinode-851952 ClientURLs:[https://192.168.39.146:2379]}","request-path":"/0/members/fc85001aa37e7974/attributes","cluster-id":"25c4f0770a3181de","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-21T19:07:58.820064Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T19:07:58.821576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-21T19:07:58.823227Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-21T19:07:58.823256Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-21T19:09:09.819463Z","caller":"traceutil/trace.go:171","msg":"trace[1701788756] linearizableReadLoop","detail":"{readStateIndex:1188; appliedIndex:1187; }","duration":"117.310687ms","start":"2024-06-21T19:09:09.702116Z","end":"2024-06-21T19:09:09.819427Z","steps":["trace[1701788756] 'read index received'  (duration: 117.195271ms)","trace[1701788756] 'applied index is now lower than readState.Index'  (duration: 112.298µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-21T19:09:09.819589Z","caller":"traceutil/trace.go:171","msg":"trace[1805582098] transaction","detail":"{read_only:false; response_revision:1083; number_of_response:1; }","duration":"208.970711ms","start":"2024-06-21T19:09:09.61061Z","end":"2024-06-21T19:09:09.819581Z","steps":["trace[1805582098] 'process raft request'  (duration: 208.699307ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-21T19:09:09.82001Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.795254ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:8"}
	{"level":"info","ts":"2024-06-21T19:09:09.820093Z","caller":"traceutil/trace.go:171","msg":"trace[912816120] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:1083; }","duration":"117.988767ms","start":"2024-06-21T19:09:09.702092Z","end":"2024-06-21T19:09:09.820081Z","steps":["trace[912816120] 'agreement among raft nodes before linearized reading'  (duration: 117.700337ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:09:17 up 7 min,  0 users,  load average: 0.45, 0.28, 0.12
	Linux multinode-851952 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [36ce441ec2d19ca6ea23c289892f5a6e9e89696807088fc5a1cbb22c4c594f83] <==
	I0621 19:05:34.289027       1 main.go:250] Node multinode-851952-m03 has CIDR [10.244.3.0/24] 
	I0621 19:05:44.301932       1 main.go:223] Handling node with IPs: map[192.168.39.146:{}]
	I0621 19:05:44.302032       1 main.go:227] handling current node
	I0621 19:05:44.302065       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0621 19:05:44.302089       1 main.go:250] Node multinode-851952-m02 has CIDR [10.244.1.0/24] 
	I0621 19:05:44.302274       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0621 19:05:44.302305       1 main.go:250] Node multinode-851952-m03 has CIDR [10.244.3.0/24] 
	I0621 19:05:54.309768       1 main.go:223] Handling node with IPs: map[192.168.39.146:{}]
	I0621 19:05:54.309950       1 main.go:227] handling current node
	I0621 19:05:54.309980       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0621 19:05:54.309989       1 main.go:250] Node multinode-851952-m02 has CIDR [10.244.1.0/24] 
	I0621 19:05:54.310226       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0621 19:05:54.310244       1 main.go:250] Node multinode-851952-m03 has CIDR [10.244.3.0/24] 
	I0621 19:06:04.316444       1 main.go:223] Handling node with IPs: map[192.168.39.146:{}]
	I0621 19:06:04.316498       1 main.go:227] handling current node
	I0621 19:06:04.316517       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0621 19:06:04.316522       1 main.go:250] Node multinode-851952-m02 has CIDR [10.244.1.0/24] 
	I0621 19:06:04.316663       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0621 19:06:04.316681       1 main.go:250] Node multinode-851952-m03 has CIDR [10.244.3.0/24] 
	I0621 19:06:14.329700       1 main.go:223] Handling node with IPs: map[192.168.39.146:{}]
	I0621 19:06:14.329818       1 main.go:227] handling current node
	I0621 19:06:14.329843       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0621 19:06:14.329865       1 main.go:250] Node multinode-851952-m02 has CIDR [10.244.1.0/24] 
	I0621 19:06:14.330002       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0621 19:06:14.330022       1 main.go:250] Node multinode-851952-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [e6c4b975ffa0bef3cdb48bc25f7eeab1294213df3e8d6a05c2e892207c0dc173] <==
	I0621 19:08:32.633027       1 main.go:250] Node multinode-851952-m03 has CIDR [10.244.3.0/24] 
	I0621 19:08:42.645631       1 main.go:223] Handling node with IPs: map[192.168.39.146:{}]
	I0621 19:08:42.645811       1 main.go:227] handling current node
	I0621 19:08:42.645843       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0621 19:08:42.645879       1 main.go:250] Node multinode-851952-m02 has CIDR [10.244.1.0/24] 
	I0621 19:08:42.645996       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0621 19:08:42.646017       1 main.go:250] Node multinode-851952-m03 has CIDR [10.244.3.0/24] 
	I0621 19:08:52.651721       1 main.go:223] Handling node with IPs: map[192.168.39.146:{}]
	I0621 19:08:52.651754       1 main.go:227] handling current node
	I0621 19:08:52.651764       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0621 19:08:52.651768       1 main.go:250] Node multinode-851952-m02 has CIDR [10.244.1.0/24] 
	I0621 19:08:52.651858       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0621 19:08:52.651880       1 main.go:250] Node multinode-851952-m03 has CIDR [10.244.3.0/24] 
	I0621 19:09:02.656621       1 main.go:223] Handling node with IPs: map[192.168.39.146:{}]
	I0621 19:09:02.656708       1 main.go:227] handling current node
	I0621 19:09:02.656732       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0621 19:09:02.656749       1 main.go:250] Node multinode-851952-m02 has CIDR [10.244.1.0/24] 
	I0621 19:09:02.656864       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0621 19:09:02.656885       1 main.go:250] Node multinode-851952-m03 has CIDR [10.244.3.0/24] 
	I0621 19:09:12.665982       1 main.go:223] Handling node with IPs: map[192.168.39.146:{}]
	I0621 19:09:12.666252       1 main.go:227] handling current node
	I0621 19:09:12.666300       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0621 19:09:12.666321       1 main.go:250] Node multinode-851952-m02 has CIDR [10.244.1.0/24] 
	I0621 19:09:12.666449       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0621 19:09:12.666472       1 main.go:250] Node multinode-851952-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [40087081e25d8085a666328a29561a84b540fe152452e7091cefd1db700e8acd] <==
	W0621 19:06:21.112937       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.113014       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.113045       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.114112       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.114684       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.114743       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.114782       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.114868       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115059       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115126       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115225       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115281       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115326       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115374       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115464       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115515       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115587       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115650       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115703       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115755       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115805       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115924       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.116062       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.116091       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.116870       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [48fda169ce7646008e0341848682e03953afca18591e5318433acb9c645b3d49] <==
	I0621 19:08:00.112728       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0621 19:08:00.116191       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0621 19:08:00.116268       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0621 19:08:00.116307       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0621 19:08:00.114715       1 shared_informer.go:320] Caches are synced for configmaps
	I0621 19:08:00.122726       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0621 19:08:00.122841       1 aggregator.go:165] initial CRD sync complete...
	I0621 19:08:00.122923       1 autoregister_controller.go:141] Starting autoregister controller
	I0621 19:08:00.122950       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0621 19:08:00.123015       1 cache.go:39] Caches are synced for autoregister controller
	I0621 19:08:00.123213       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0621 19:08:00.114879       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0621 19:08:00.130785       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0621 19:08:00.156579       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0621 19:08:00.171617       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0621 19:08:00.171647       1 policy_source.go:224] refreshing policies
	I0621 19:08:00.213657       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0621 19:08:01.029786       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0621 19:08:02.416606       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0621 19:08:02.557705       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0621 19:08:02.571132       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0621 19:08:02.662942       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0621 19:08:02.670869       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0621 19:08:12.739919       1 controller.go:615] quota admission added evaluator for: endpoints
	I0621 19:08:12.750372       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [19e6c1b76c6742180378fd84a02cf3d13dc8f538fd4759f90984ca1b0cfbda0d] <==
	I0621 19:08:13.427122       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0621 19:08:34.360975       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.3513ms"
	I0621 19:08:34.361093       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.96µs"
	I0621 19:08:34.361369       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.375µs"
	I0621 19:08:34.369842       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.321963ms"
	I0621 19:08:34.370214       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.597µs"
	I0621 19:08:38.807297       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-851952-m02\" does not exist"
	I0621 19:08:38.816564       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-851952-m02" podCIDRs=["10.244.1.0/24"]
	I0621 19:08:39.702648       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.627µs"
	I0621 19:08:39.712374       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.506µs"
	I0621 19:08:39.721473       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.259µs"
	I0621 19:08:39.764339       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.426µs"
	I0621 19:08:39.771999       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.309µs"
	I0621 19:08:39.776505       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.021µs"
	I0621 19:08:43.773455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.901µs"
	I0621 19:08:46.407126       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851952-m02"
	I0621 19:08:46.424294       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.206µs"
	I0621 19:08:46.444623       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.426µs"
	I0621 19:08:50.289208       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.091246ms"
	I0621 19:08:50.289330       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.057µs"
	I0621 19:09:04.440009       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851952-m02"
	I0621 19:09:05.388953       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851952-m02"
	I0621 19:09:05.390218       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-851952-m03\" does not exist"
	I0621 19:09:05.411119       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-851952-m03" podCIDRs=["10.244.2.0/24"]
	I0621 19:09:14.210867       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851952-m02"
	
	
	==> kube-controller-manager [736b6d52184414f45058085f602c2205184a11654872f5c8b09b8379789a201c] <==
	I0621 19:02:47.633653       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-851952-m02\" does not exist"
	I0621 19:02:47.657061       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-851952-m02" podCIDRs=["10.244.1.0/24"]
	I0621 19:02:51.302256       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-851952-m02"
	I0621 19:02:56.957928       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851952-m02"
	I0621 19:02:59.104740       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.542777ms"
	I0621 19:02:59.139867       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.063021ms"
	I0621 19:02:59.165808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.886072ms"
	I0621 19:02:59.166035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.929µs"
	I0621 19:03:02.553788       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.273486ms"
	I0621 19:03:02.554612       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.834µs"
	I0621 19:03:03.156457       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.860829ms"
	I0621 19:03:03.156719       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.15µs"
	I0621 19:03:29.612625       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-851952-m03\" does not exist"
	I0621 19:03:29.612685       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851952-m02"
	I0621 19:03:29.653222       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-851952-m03" podCIDRs=["10.244.2.0/24"]
	I0621 19:03:31.321026       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-851952-m03"
	I0621 19:03:39.022422       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851952-m02"
	I0621 19:04:07.095940       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851952-m02"
	I0621 19:04:08.041739       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851952-m02"
	I0621 19:04:08.041858       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-851952-m03\" does not exist"
	I0621 19:04:08.056936       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-851952-m03" podCIDRs=["10.244.3.0/24"]
	I0621 19:04:15.498130       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851952-m02"
	I0621 19:05:01.370908       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851952-m03"
	I0621 19:05:01.433019       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.714885ms"
	I0621 19:05:01.433515       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.684µs"
	
	
	==> kube-proxy [9da10767b93f9fd673b0149bec75fb836426f92a6e05b0dd34b0e7b07b3575b2] <==
	I0621 19:02:12.725614       1 server_linux.go:69] "Using iptables proxy"
	I0621 19:02:12.753321       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.146"]
	I0621 19:02:12.849454       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0621 19:02:12.849486       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0621 19:02:12.849505       1 server_linux.go:165] "Using iptables Proxier"
	I0621 19:02:12.856367       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0621 19:02:12.856666       1 server.go:872] "Version info" version="v1.30.2"
	I0621 19:02:12.856680       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 19:02:12.860220       1 config.go:192] "Starting service config controller"
	I0621 19:02:12.860236       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0621 19:02:12.860264       1 config.go:101] "Starting endpoint slice config controller"
	I0621 19:02:12.860267       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0621 19:02:12.860730       1 config.go:319] "Starting node config controller"
	I0621 19:02:12.860736       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0621 19:02:12.961594       1 shared_informer.go:320] Caches are synced for node config
	I0621 19:02:12.961622       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0621 19:02:12.961613       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [c77c4e18ef1f9562589ad7ac29c7ba5f0f96004e260278d0c12d931432215302] <==
	I0621 19:08:01.824748       1 server_linux.go:69] "Using iptables proxy"
	I0621 19:08:01.854294       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.146"]
	I0621 19:08:01.963970       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0621 19:08:01.964006       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0621 19:08:01.964023       1 server_linux.go:165] "Using iptables Proxier"
	I0621 19:08:01.969347       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0621 19:08:01.969592       1 server.go:872] "Version info" version="v1.30.2"
	I0621 19:08:01.969624       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 19:08:01.980289       1 config.go:192] "Starting service config controller"
	I0621 19:08:01.980331       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0621 19:08:01.980370       1 config.go:101] "Starting endpoint slice config controller"
	I0621 19:08:01.980375       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0621 19:08:01.981042       1 config.go:319] "Starting node config controller"
	I0621 19:08:01.981068       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0621 19:08:02.082252       1 shared_informer.go:320] Caches are synced for node config
	I0621 19:08:02.082286       1 shared_informer.go:320] Caches are synced for service config
	I0621 19:08:02.082338       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1932382e2a0188fb72b28909ac83ee13bd52cc5f6e016e8ffd77d1e3a08a85a2] <==
	I0621 19:07:57.730107       1 serving.go:380] Generated self-signed cert in-memory
	I0621 19:08:00.136667       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0621 19:08:00.136702       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 19:08:00.140383       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0621 19:08:00.140596       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0621 19:08:00.140638       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0621 19:08:00.140687       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0621 19:08:00.142275       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0621 19:08:00.144260       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0621 19:08:00.144303       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0621 19:08:00.144310       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0621 19:08:00.240874       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0621 19:08:00.246225       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0621 19:08:00.246281       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kube-scheduler [77ba488fac51d9683c16065c66b9a57f223578131eb37d5b3b8f4ee54ab59fd1] <==
	E0621 19:01:56.057538       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0621 19:01:56.057615       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0621 19:01:56.057647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0621 19:01:56.059844       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 19:01:56.059929       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0621 19:01:56.880747       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0621 19:01:56.880861       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0621 19:01:56.912295       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 19:01:56.912385       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0621 19:01:56.926259       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0621 19:01:56.926308       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0621 19:01:57.058063       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0621 19:01:57.058114       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0621 19:01:57.124344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0621 19:01:57.124395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0621 19:01:57.126975       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 19:01:57.127013       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0621 19:01:57.145571       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 19:01:57.145611       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0621 19:01:57.200954       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0621 19:01:57.201002       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0621 19:01:57.233122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0621 19:01:57.233212       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0621 19:01:58.650757       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0621 19:06:21.089254       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 21 19:07:56 multinode-851952 kubelet[3020]: E0621 19:07:56.768664    3020 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	Jun 21 19:07:57 multinode-851952 kubelet[3020]: I0621 19:07:57.473584    3020 kubelet_node_status.go:73] "Attempting to register node" node="multinode-851952"
	Jun 21 19:08:00 multinode-851952 kubelet[3020]: I0621 19:08:00.208568    3020 kubelet_node_status.go:112] "Node was previously registered" node="multinode-851952"
	Jun 21 19:08:00 multinode-851952 kubelet[3020]: I0621 19:08:00.208935    3020 kubelet_node_status.go:76] "Successfully registered node" node="multinode-851952"
	Jun 21 19:08:00 multinode-851952 kubelet[3020]: I0621 19:08:00.210734    3020 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 21 19:08:00 multinode-851952 kubelet[3020]: I0621 19:08:00.211886    3020 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 21 19:08:00 multinode-851952 kubelet[3020]: I0621 19:08:00.948221    3020 apiserver.go:52] "Watching apiserver"
	Jun 21 19:08:00 multinode-851952 kubelet[3020]: I0621 19:08:00.952260    3020 topology_manager.go:215] "Topology Admit Handler" podUID="68820bdc-6391-4f97-ab90-8d100de2f0f1" podNamespace="kube-system" podName="kindnet-mrcqf"
	Jun 21 19:08:00 multinode-851952 kubelet[3020]: I0621 19:08:00.952503    3020 topology_manager.go:215] "Topology Admit Handler" podUID="9727b60b-2689-4f26-9276-88efe3296374" podNamespace="kube-system" podName="kube-proxy-lcgp6"
	Jun 21 19:08:00 multinode-851952 kubelet[3020]: I0621 19:08:00.952619    3020 topology_manager.go:215] "Topology Admit Handler" podUID="3abeb3c8-683d-4272-ae28-0193331f528d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hfwfj"
	Jun 21 19:08:00 multinode-851952 kubelet[3020]: I0621 19:08:00.952741    3020 topology_manager.go:215] "Topology Admit Handler" podUID="e789867a-771b-4879-b010-02d710e5742a" podNamespace="kube-system" podName="storage-provisioner"
	Jun 21 19:08:00 multinode-851952 kubelet[3020]: I0621 19:08:00.952852    3020 topology_manager.go:215] "Topology Admit Handler" podUID="fb5aa3b9-e31c-486b-bc01-8faea6986d7c" podNamespace="default" podName="busybox-fc5497c4f-rwq2d"
	Jun 21 19:08:00 multinode-851952 kubelet[3020]: I0621 19:08:00.965604    3020 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 21 19:08:01 multinode-851952 kubelet[3020]: I0621 19:08:01.023630    3020 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9727b60b-2689-4f26-9276-88efe3296374-xtables-lock\") pod \"kube-proxy-lcgp6\" (UID: \"9727b60b-2689-4f26-9276-88efe3296374\") " pod="kube-system/kube-proxy-lcgp6"
	Jun 21 19:08:01 multinode-851952 kubelet[3020]: I0621 19:08:01.023860    3020 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9727b60b-2689-4f26-9276-88efe3296374-lib-modules\") pod \"kube-proxy-lcgp6\" (UID: \"9727b60b-2689-4f26-9276-88efe3296374\") " pod="kube-system/kube-proxy-lcgp6"
	Jun 21 19:08:01 multinode-851952 kubelet[3020]: I0621 19:08:01.024001    3020 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/68820bdc-6391-4f97-ab90-8d100de2f0f1-cni-cfg\") pod \"kindnet-mrcqf\" (UID: \"68820bdc-6391-4f97-ab90-8d100de2f0f1\") " pod="kube-system/kindnet-mrcqf"
	Jun 21 19:08:01 multinode-851952 kubelet[3020]: I0621 19:08:01.024169    3020 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68820bdc-6391-4f97-ab90-8d100de2f0f1-lib-modules\") pod \"kindnet-mrcqf\" (UID: \"68820bdc-6391-4f97-ab90-8d100de2f0f1\") " pod="kube-system/kindnet-mrcqf"
	Jun 21 19:08:01 multinode-851952 kubelet[3020]: I0621 19:08:01.024266    3020 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68820bdc-6391-4f97-ab90-8d100de2f0f1-xtables-lock\") pod \"kindnet-mrcqf\" (UID: \"68820bdc-6391-4f97-ab90-8d100de2f0f1\") " pod="kube-system/kindnet-mrcqf"
	Jun 21 19:08:01 multinode-851952 kubelet[3020]: I0621 19:08:01.024350    3020 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e789867a-771b-4879-b010-02d710e5742a-tmp\") pod \"storage-provisioner\" (UID: \"e789867a-771b-4879-b010-02d710e5742a\") " pod="kube-system/storage-provisioner"
	Jun 21 19:08:09 multinode-851952 kubelet[3020]: I0621 19:08:09.388489    3020 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jun 21 19:08:56 multinode-851952 kubelet[3020]: E0621 19:08:56.010463    3020 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 19:08:56 multinode-851952 kubelet[3020]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 19:08:56 multinode-851952 kubelet[3020]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 19:08:56 multinode-851952 kubelet[3020]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 19:08:56 multinode-851952 kubelet[3020]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0621 19:09:16.667018   47823 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19112-8111/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-851952 -n multinode-851952
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-851952 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (300.31s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 stop
E0621 19:10:54.862622   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-851952 stop: exit status 82 (2m0.468163494s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-851952-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-851952 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-851952 status: exit status 3 (18.669962643s)

                                                
                                                
-- stdout --
	multinode-851952
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-851952-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0621 19:11:39.574093   48516 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.172:22: connect: no route to host
	E0621 19:11:39.574128   48516 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.172:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-851952 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-851952 -n multinode-851952
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-851952 logs -n 25: (1.375616058s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-851952 ssh -n                                                                 | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-851952 cp multinode-851952-m02:/home/docker/cp-test.txt                       | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952:/home/docker/cp-test_multinode-851952-m02_multinode-851952.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-851952 ssh -n                                                                 | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-851952 ssh -n multinode-851952 sudo cat                                       | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | /home/docker/cp-test_multinode-851952-m02_multinode-851952.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-851952 cp multinode-851952-m02:/home/docker/cp-test.txt                       | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952-m03:/home/docker/cp-test_multinode-851952-m02_multinode-851952-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-851952 ssh -n                                                                 | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-851952 ssh -n multinode-851952-m03 sudo cat                                   | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | /home/docker/cp-test_multinode-851952-m02_multinode-851952-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-851952 cp testdata/cp-test.txt                                                | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-851952 ssh -n                                                                 | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-851952 cp multinode-851952-m03:/home/docker/cp-test.txt                       | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile293116882/001/cp-test_multinode-851952-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-851952 ssh -n                                                                 | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-851952 cp multinode-851952-m03:/home/docker/cp-test.txt                       | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952:/home/docker/cp-test_multinode-851952-m03_multinode-851952.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-851952 ssh -n                                                                 | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-851952 ssh -n multinode-851952 sudo cat                                       | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | /home/docker/cp-test_multinode-851952-m03_multinode-851952.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-851952 cp multinode-851952-m03:/home/docker/cp-test.txt                       | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952-m02:/home/docker/cp-test_multinode-851952-m03_multinode-851952-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-851952 ssh -n                                                                 | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | multinode-851952-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-851952 ssh -n multinode-851952-m02 sudo cat                                   | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	|         | /home/docker/cp-test_multinode-851952-m03_multinode-851952-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-851952 node stop m03                                                          | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:03 UTC |
	| node    | multinode-851952 node start                                                             | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:03 UTC | 21 Jun 24 19:04 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-851952                                                                | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:04 UTC |                     |
	| stop    | -p multinode-851952                                                                     | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:04 UTC |                     |
	| start   | -p multinode-851952                                                                     | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:06 UTC | 21 Jun 24 19:09 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-851952                                                                | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:09 UTC |                     |
	| node    | multinode-851952 node delete                                                            | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:09 UTC | 21 Jun 24 19:09 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-851952 stop                                                                   | multinode-851952 | jenkins | v1.33.1 | 21 Jun 24 19:09 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/21 19:06:20
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0621 19:06:20.045149   46765 out.go:291] Setting OutFile to fd 1 ...
	I0621 19:06:20.045553   46765 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 19:06:20.045564   46765 out.go:304] Setting ErrFile to fd 2...
	I0621 19:06:20.045569   46765 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 19:06:20.045786   46765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 19:06:20.046359   46765 out.go:298] Setting JSON to false
	I0621 19:06:20.047239   46765 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6478,"bootTime":1718990302,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 19:06:20.047298   46765 start.go:139] virtualization: kvm guest
	I0621 19:06:20.049572   46765 out.go:177] * [multinode-851952] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 19:06:20.051045   46765 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 19:06:20.051052   46765 notify.go:220] Checking for updates...
	I0621 19:06:20.052311   46765 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 19:06:20.053564   46765 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 19:06:20.055045   46765 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 19:06:20.056361   46765 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 19:06:20.057586   46765 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 19:06:20.059244   46765 config.go:182] Loaded profile config "multinode-851952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 19:06:20.059351   46765 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 19:06:20.059761   46765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 19:06:20.059831   46765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 19:06:20.074865   46765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45043
	I0621 19:06:20.075341   46765 main.go:141] libmachine: () Calling .GetVersion
	I0621 19:06:20.075902   46765 main.go:141] libmachine: Using API Version  1
	I0621 19:06:20.075926   46765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 19:06:20.076245   46765 main.go:141] libmachine: () Calling .GetMachineName
	I0621 19:06:20.076441   46765 main.go:141] libmachine: (multinode-851952) Calling .DriverName
	I0621 19:06:20.110771   46765 out.go:177] * Using the kvm2 driver based on existing profile
	I0621 19:06:20.112015   46765 start.go:297] selected driver: kvm2
	I0621 19:06:20.112036   46765 start.go:901] validating driver "kvm2" against &{Name:multinode-851952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.2 ClusterName:multinode-851952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.135 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 19:06:20.112175   46765 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 19:06:20.112485   46765 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 19:06:20.112548   46765 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 19:06:20.127074   46765 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 19:06:20.127820   46765 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0621 19:06:20.127879   46765 cni.go:84] Creating CNI manager for ""
	I0621 19:06:20.127890   46765 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0621 19:06:20.127978   46765 start.go:340] cluster config:
	{Name:multinode-851952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-851952 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.135 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 19:06:20.128112   46765 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 19:06:20.129931   46765 out.go:177] * Starting "multinode-851952" primary control-plane node in "multinode-851952" cluster
	I0621 19:06:20.131079   46765 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 19:06:20.131112   46765 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 19:06:20.131121   46765 cache.go:56] Caching tarball of preloaded images
	I0621 19:06:20.131201   46765 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 19:06:20.131211   46765 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
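
	The two lines above are the preload check: the start code looks for a per-Kubernetes-version tarball of container images in the local cache and skips the download when it is already present, as it is here. A rough shell sketch of that check (the download URL is a placeholder for illustration, not taken from this log):

	    PRELOAD="preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4"
	    CACHE="$HOME/.minikube/cache/preloaded-tarball/$PRELOAD"
	    # download only when the tarball is missing; this run finds it already cached
	    if [ ! -f "$CACHE" ]; then
	      curl -fLo "$CACHE" "https://example.invalid/preloads/$PRELOAD"   # placeholder URL, not from this log
	    fi
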
	I0621 19:06:20.131332   46765 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952/config.json ...
	I0621 19:06:20.131519   46765 start.go:360] acquireMachinesLock for multinode-851952: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 19:06:20.131558   46765 start.go:364] duration metric: took 21.852µs to acquireMachinesLock for "multinode-851952"
	I0621 19:06:20.131572   46765 start.go:96] Skipping create...Using existing machine configuration
	I0621 19:06:20.131580   46765 fix.go:54] fixHost starting: 
	I0621 19:06:20.131826   46765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 19:06:20.131855   46765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 19:06:20.146031   46765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43453
	I0621 19:06:20.146410   46765 main.go:141] libmachine: () Calling .GetVersion
	I0621 19:06:20.146866   46765 main.go:141] libmachine: Using API Version  1
	I0621 19:06:20.146888   46765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 19:06:20.147233   46765 main.go:141] libmachine: () Calling .GetMachineName
	I0621 19:06:20.147456   46765 main.go:141] libmachine: (multinode-851952) Calling .DriverName
	I0621 19:06:20.147588   46765 main.go:141] libmachine: (multinode-851952) Calling .GetState
	I0621 19:06:20.149044   46765 fix.go:112] recreateIfNeeded on multinode-851952: state=Running err=<nil>
	W0621 19:06:20.149059   46765 fix.go:138] unexpected machine state, will restart: <nil>
	I0621 19:06:20.151173   46765 out.go:177] * Updating the running kvm2 "multinode-851952" VM ...
	I0621 19:06:20.152449   46765 machine.go:94] provisionDockerMachine start ...
	I0621 19:06:20.152470   46765 main.go:141] libmachine: (multinode-851952) Calling .DriverName
	I0621 19:06:20.152656   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHHostname
	I0621 19:06:20.155643   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.156192   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:06:20.156224   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.156367   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHPort
	I0621 19:06:20.156546   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:06:20.156678   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:06:20.156822   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHUsername
	I0621 19:06:20.157043   46765 main.go:141] libmachine: Using SSH client type: native
	I0621 19:06:20.157289   46765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0621 19:06:20.157313   46765 main.go:141] libmachine: About to run SSH command:
	hostname
	I0621 19:06:20.266796   46765 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-851952
	
	I0621 19:06:20.266827   46765 main.go:141] libmachine: (multinode-851952) Calling .GetMachineName
	I0621 19:06:20.267069   46765 buildroot.go:166] provisioning hostname "multinode-851952"
	I0621 19:06:20.267089   46765 main.go:141] libmachine: (multinode-851952) Calling .GetMachineName
	I0621 19:06:20.267311   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHHostname
	I0621 19:06:20.269998   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.270402   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:06:20.270427   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.270547   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHPort
	I0621 19:06:20.270723   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:06:20.270853   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:06:20.270994   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHUsername
	I0621 19:06:20.271168   46765 main.go:141] libmachine: Using SSH client type: native
	I0621 19:06:20.271400   46765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0621 19:06:20.271419   46765 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-851952 && echo "multinode-851952" | sudo tee /etc/hostname
	I0621 19:06:20.389464   46765 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-851952
	
	I0621 19:06:20.389498   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHHostname
	I0621 19:06:20.392333   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.392718   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:06:20.392750   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.392989   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHPort
	I0621 19:06:20.393156   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:06:20.393302   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:06:20.393412   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHUsername
	I0621 19:06:20.393565   46765 main.go:141] libmachine: Using SSH client type: native
	I0621 19:06:20.393740   46765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0621 19:06:20.393755   46765 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-851952' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-851952/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-851952' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 19:06:20.498431   46765 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 19:06:20.498457   46765 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 19:06:20.498471   46765 buildroot.go:174] setting up certificates
	I0621 19:06:20.498480   46765 provision.go:84] configureAuth start
	I0621 19:06:20.498488   46765 main.go:141] libmachine: (multinode-851952) Calling .GetMachineName
	I0621 19:06:20.498764   46765 main.go:141] libmachine: (multinode-851952) Calling .GetIP
	I0621 19:06:20.501235   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.501562   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:06:20.501585   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.501708   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHHostname
	I0621 19:06:20.503796   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.504177   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:06:20.504216   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.504273   46765 provision.go:143] copyHostCerts
	I0621 19:06:20.504306   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 19:06:20.504348   46765 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 19:06:20.504356   46765 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 19:06:20.504418   46765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 19:06:20.504514   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 19:06:20.504532   46765 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 19:06:20.504539   46765 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 19:06:20.504564   46765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 19:06:20.504619   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 19:06:20.504635   46765 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 19:06:20.504641   46765 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 19:06:20.504661   46765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 19:06:20.504717   46765 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.multinode-851952 san=[127.0.0.1 192.168.39.146 localhost minikube multinode-851952]
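
	The server certificate above is issued from the local minikube CA and must cover every SAN in the log line (loopback, the VM IP, and the host names). A minimal openssl sketch that would produce an equivalent SAN certificate, assuming the ca.pem / ca-key.pem files named in the log and an arbitrary 825-day validity:

	    # sign a node server cert covering the same SANs listed above (sketch, not minikube's own code path)
	    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	      -subj "/O=jenkins.multinode-851952/CN=multinode-851952"
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	      -days 825 -out server.pem \
	      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.146,DNS:localhost,DNS:minikube,DNS:multinode-851952")
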
	I0621 19:06:20.797647   46765 provision.go:177] copyRemoteCerts
	I0621 19:06:20.797710   46765 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 19:06:20.797732   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHHostname
	I0621 19:06:20.800244   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.800594   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:06:20.800631   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.800698   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHPort
	I0621 19:06:20.800885   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:06:20.801065   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHUsername
	I0621 19:06:20.801215   46765 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/multinode-851952/id_rsa Username:docker}
	I0621 19:06:20.888463   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0621 19:06:20.888528   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 19:06:20.914567   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0621 19:06:20.914657   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0621 19:06:20.939338   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0621 19:06:20.939421   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 19:06:20.963344   46765 provision.go:87] duration metric: took 464.853396ms to configureAuth
	I0621 19:06:20.963375   46765 buildroot.go:189] setting minikube options for container-runtime
	I0621 19:06:20.963600   46765 config.go:182] Loaded profile config "multinode-851952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 19:06:20.963672   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHHostname
	I0621 19:06:20.966795   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.967173   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:06:20.967215   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:06:20.967347   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHPort
	I0621 19:06:20.967555   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:06:20.967681   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:06:20.967803   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHUsername
	I0621 19:06:20.967940   46765 main.go:141] libmachine: Using SSH client type: native
	I0621 19:06:20.968147   46765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0621 19:06:20.968163   46765 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 19:07:51.817679   46765 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 19:07:51.817712   46765 machine.go:97] duration metric: took 1m31.665249447s to provisionDockerMachine
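
	The command above drops an insecure-registry flag for the service CIDR into a sysconfig file and restarts CRI-O; the result line is stamped 19:07:51, roughly 91 seconds after the command was issued at 19:06:20, which accounts for most of the 1m31.67s provisionDockerMachine duration. A sketch of the same step, assuming the crio unit in minikube's Buildroot guest loads that file as an environment file:

	    # write the CRI-O drop-in and restart the runtime
	    # (assumes crio.service loads /etc/sysconfig/crio.minikube as an environment file)
	    sudo mkdir -p /etc/sysconfig
	    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
	      | sudo tee /etc/sysconfig/crio.minikube
	    sudo systemctl restart crio
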
	I0621 19:07:51.817724   46765 start.go:293] postStartSetup for "multinode-851952" (driver="kvm2")
	I0621 19:07:51.817733   46765 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 19:07:51.817767   46765 main.go:141] libmachine: (multinode-851952) Calling .DriverName
	I0621 19:07:51.818121   46765 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 19:07:51.818149   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHHostname
	I0621 19:07:51.821081   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:07:51.821664   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:07:51.821700   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:07:51.821909   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHPort
	I0621 19:07:51.822103   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:07:51.822293   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHUsername
	I0621 19:07:51.822541   46765 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/multinode-851952/id_rsa Username:docker}
	I0621 19:07:51.905158   46765 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 19:07:51.909014   46765 command_runner.go:130] > NAME=Buildroot
	I0621 19:07:51.909033   46765 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0621 19:07:51.909039   46765 command_runner.go:130] > ID=buildroot
	I0621 19:07:51.909047   46765 command_runner.go:130] > VERSION_ID=2023.02.9
	I0621 19:07:51.909054   46765 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0621 19:07:51.909159   46765 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 19:07:51.909184   46765 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 19:07:51.909282   46765 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 19:07:51.909360   46765 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 19:07:51.909370   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /etc/ssl/certs/153292.pem
	I0621 19:07:51.909458   46765 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 19:07:51.918667   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 19:07:51.942307   46765 start.go:296] duration metric: took 124.571035ms for postStartSetup
	I0621 19:07:51.942345   46765 fix.go:56] duration metric: took 1m31.810765351s for fixHost
	I0621 19:07:51.942363   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHHostname
	I0621 19:07:51.945262   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:07:51.945678   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:07:51.945707   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:07:51.945863   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHPort
	I0621 19:07:51.946055   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:07:51.946261   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:07:51.946398   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHUsername
	I0621 19:07:51.946570   46765 main.go:141] libmachine: Using SSH client type: native
	I0621 19:07:51.946773   46765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0621 19:07:51.946789   46765 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0621 19:07:52.050586   46765 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718996872.032188392
	
	I0621 19:07:52.050609   46765 fix.go:216] guest clock: 1718996872.032188392
	I0621 19:07:52.050616   46765 fix.go:229] Guest: 2024-06-21 19:07:52.032188392 +0000 UTC Remote: 2024-06-21 19:07:51.942348587 +0000 UTC m=+91.931891791 (delta=89.839805ms)
	I0621 19:07:52.050635   46765 fix.go:200] guest clock delta is within tolerance: 89.839805ms
	I0621 19:07:52.050640   46765 start.go:83] releasing machines lock for "multinode-851952", held for 1m31.919073483s
	I0621 19:07:52.050656   46765 main.go:141] libmachine: (multinode-851952) Calling .DriverName
	I0621 19:07:52.050940   46765 main.go:141] libmachine: (multinode-851952) Calling .GetIP
	I0621 19:07:52.053927   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:07:52.054324   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:07:52.054355   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:07:52.054482   46765 main.go:141] libmachine: (multinode-851952) Calling .DriverName
	I0621 19:07:52.055012   46765 main.go:141] libmachine: (multinode-851952) Calling .DriverName
	I0621 19:07:52.055198   46765 main.go:141] libmachine: (multinode-851952) Calling .DriverName
	I0621 19:07:52.055293   46765 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 19:07:52.055371   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHHostname
	I0621 19:07:52.055403   46765 ssh_runner.go:195] Run: cat /version.json
	I0621 19:07:52.055425   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHHostname
	I0621 19:07:52.058334   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:07:52.058647   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:07:52.058669   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:07:52.058681   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:07:52.058835   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHPort
	I0621 19:07:52.059022   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:07:52.059173   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:07:52.059196   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:07:52.059209   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHUsername
	I0621 19:07:52.059353   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHPort
	I0621 19:07:52.059355   46765 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/multinode-851952/id_rsa Username:docker}
	I0621 19:07:52.059602   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:07:52.059751   46765 main.go:141] libmachine: (multinode-851952) Calling .GetSSHUsername
	I0621 19:07:52.059899   46765 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/multinode-851952/id_rsa Username:docker}
	I0621 19:07:52.167667   46765 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0621 19:07:52.168333   46765 command_runner.go:130] > {"iso_version": "v1.33.1-1718923868-19112", "kicbase_version": "v0.0.44-1718753665-19106", "minikube_version": "v1.33.1", "commit": "638985b67054e850774ca4205134dbef5391c341"}
	I0621 19:07:52.168487   46765 ssh_runner.go:195] Run: systemctl --version
	I0621 19:07:52.174385   46765 command_runner.go:130] > systemd 252 (252)
	I0621 19:07:52.174414   46765 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0621 19:07:52.174465   46765 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 19:07:52.329284   46765 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0621 19:07:52.337836   46765 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0621 19:07:52.337973   46765 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 19:07:52.338068   46765 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 19:07:52.347385   46765 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0621 19:07:52.347411   46765 start.go:494] detecting cgroup driver to use...
	I0621 19:07:52.347476   46765 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 19:07:52.363506   46765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 19:07:52.377050   46765 docker.go:217] disabling cri-docker service (if available) ...
	I0621 19:07:52.377104   46765 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 19:07:52.390472   46765 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 19:07:52.404662   46765 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 19:07:52.546574   46765 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 19:07:52.692768   46765 docker.go:233] disabling docker service ...
	I0621 19:07:52.692828   46765 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 19:07:52.709742   46765 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 19:07:52.723537   46765 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 19:07:52.870546   46765 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 19:07:53.019876   46765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
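
	Because this cluster runs on CRI-O, the cri-dockerd and docker units are stopped, disabled, and masked before the runtime is configured, and the final is-active check confirms docker stays down. The same sequence condensed into one sketch (unit names taken from the log; grouping several units per call is a simplification):

	    # make sure the Docker-based runtimes cannot come back up on this node
	    sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
	    sudo systemctl disable cri-docker.socket docker.socket
	    sudo systemctl mask cri-docker.service docker.service
	    sudo systemctl is-active --quiet docker && echo "docker still active" || echo "docker is down"
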
	I0621 19:07:53.036410   46765 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 19:07:53.053814   46765 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
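
	The step above writes /etc/crictl.yaml so that crictl talks to CRI-O's socket instead of probing the default endpoints. A quick way to confirm the result, assuming the same socket path:

	    cat /etc/crictl.yaml
	    # expected: runtime-endpoint: unix:///var/run/crio/crio.sock
	    sudo crictl version   # now resolves the endpoint from /etc/crictl.yaml
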
	I0621 19:07:53.054381   46765 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 19:07:53.054442   46765 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 19:07:53.065320   46765 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 19:07:53.065390   46765 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 19:07:53.075254   46765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 19:07:53.085013   46765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 19:07:53.095227   46765 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 19:07:53.106168   46765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 19:07:53.116080   46765 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 19:07:53.126788   46765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 19:07:53.137243   46765 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 19:07:53.146715   46765 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0621 19:07:53.146789   46765 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 19:07:53.156273   46765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 19:07:53.295092   46765 ssh_runner.go:195] Run: sudo systemctl restart crio
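
	The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf before the restart: it pins the pause image, switches the cgroup manager to cgroupfs, puts conmon in the pod cgroup, and opens unprivileged ports via a default sysctl. A sketch of the drop-in those edits would leave behind (the key names come from the commands above; the TOML section headers are an assumption):

	    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys after the edits, sketch)
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.9"

	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
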
	I0621 19:07:53.536431   46765 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 19:07:53.536498   46765 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 19:07:53.541163   46765 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0621 19:07:53.541185   46765 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0621 19:07:53.541208   46765 command_runner.go:130] > Device: 0,22	Inode: 1336        Links: 1
	I0621 19:07:53.541217   46765 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0621 19:07:53.541223   46765 command_runner.go:130] > Access: 2024-06-21 19:07:53.405195340 +0000
	I0621 19:07:53.541228   46765 command_runner.go:130] > Modify: 2024-06-21 19:07:53.405195340 +0000
	I0621 19:07:53.541242   46765 command_runner.go:130] > Change: 2024-06-21 19:07:53.405195340 +0000
	I0621 19:07:53.541251   46765 command_runner.go:130] >  Birth: -
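
	The start code gives the runtime up to 60 seconds to expose its socket; in this run the stat succeeds immediately after the restart. A shell sketch of that wait, assuming a one-second poll interval:

	    # poll for the CRI-O socket for up to 60s (assumed one check per second)
	    for _ in $(seq 1 60); do
	      sudo test -S /var/run/crio/crio.sock && break
	      sleep 1
	    done
	    sudo stat /var/run/crio/crio.sock
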
	I0621 19:07:53.541444   46765 start.go:562] Will wait 60s for crictl version
	I0621 19:07:53.541516   46765 ssh_runner.go:195] Run: which crictl
	I0621 19:07:53.545026   46765 command_runner.go:130] > /usr/bin/crictl
	I0621 19:07:53.545092   46765 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 19:07:53.580590   46765 command_runner.go:130] > Version:  0.1.0
	I0621 19:07:53.580611   46765 command_runner.go:130] > RuntimeName:  cri-o
	I0621 19:07:53.580616   46765 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0621 19:07:53.580629   46765 command_runner.go:130] > RuntimeApiVersion:  v1
	I0621 19:07:53.582673   46765 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 19:07:53.582754   46765 ssh_runner.go:195] Run: crio --version
	I0621 19:07:53.610851   46765 command_runner.go:130] > crio version 1.29.1
	I0621 19:07:53.610872   46765 command_runner.go:130] > Version:        1.29.1
	I0621 19:07:53.610878   46765 command_runner.go:130] > GitCommit:      unknown
	I0621 19:07:53.610883   46765 command_runner.go:130] > GitCommitDate:  unknown
	I0621 19:07:53.610887   46765 command_runner.go:130] > GitTreeState:   clean
	I0621 19:07:53.610899   46765 command_runner.go:130] > BuildDate:      2024-06-21T04:36:35Z
	I0621 19:07:53.610904   46765 command_runner.go:130] > GoVersion:      go1.21.6
	I0621 19:07:53.610908   46765 command_runner.go:130] > Compiler:       gc
	I0621 19:07:53.610912   46765 command_runner.go:130] > Platform:       linux/amd64
	I0621 19:07:53.610916   46765 command_runner.go:130] > Linkmode:       dynamic
	I0621 19:07:53.610920   46765 command_runner.go:130] > BuildTags:      
	I0621 19:07:53.610924   46765 command_runner.go:130] >   containers_image_ostree_stub
	I0621 19:07:53.610928   46765 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0621 19:07:53.610931   46765 command_runner.go:130] >   btrfs_noversion
	I0621 19:07:53.610936   46765 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0621 19:07:53.610940   46765 command_runner.go:130] >   libdm_no_deferred_remove
	I0621 19:07:53.610946   46765 command_runner.go:130] >   seccomp
	I0621 19:07:53.610950   46765 command_runner.go:130] > LDFlags:          unknown
	I0621 19:07:53.610957   46765 command_runner.go:130] > SeccompEnabled:   true
	I0621 19:07:53.610961   46765 command_runner.go:130] > AppArmorEnabled:  false
	I0621 19:07:53.611033   46765 ssh_runner.go:195] Run: crio --version
	I0621 19:07:53.639215   46765 command_runner.go:130] > crio version 1.29.1
	I0621 19:07:53.639322   46765 command_runner.go:130] > Version:        1.29.1
	I0621 19:07:53.639490   46765 command_runner.go:130] > GitCommit:      unknown
	I0621 19:07:53.639511   46765 command_runner.go:130] > GitCommitDate:  unknown
	I0621 19:07:53.639518   46765 command_runner.go:130] > GitTreeState:   clean
	I0621 19:07:53.639527   46765 command_runner.go:130] > BuildDate:      2024-06-21T04:36:35Z
	I0621 19:07:53.639534   46765 command_runner.go:130] > GoVersion:      go1.21.6
	I0621 19:07:53.639540   46765 command_runner.go:130] > Compiler:       gc
	I0621 19:07:53.639553   46765 command_runner.go:130] > Platform:       linux/amd64
	I0621 19:07:53.640387   46765 command_runner.go:130] > Linkmode:       dynamic
	I0621 19:07:53.640407   46765 command_runner.go:130] > BuildTags:      
	I0621 19:07:53.640507   46765 command_runner.go:130] >   containers_image_ostree_stub
	I0621 19:07:53.640790   46765 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0621 19:07:53.640806   46765 command_runner.go:130] >   btrfs_noversion
	I0621 19:07:53.640811   46765 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0621 19:07:53.640816   46765 command_runner.go:130] >   libdm_no_deferred_remove
	I0621 19:07:53.640819   46765 command_runner.go:130] >   seccomp
	I0621 19:07:53.640824   46765 command_runner.go:130] > LDFlags:          unknown
	I0621 19:07:53.640828   46765 command_runner.go:130] > SeccompEnabled:   true
	I0621 19:07:53.640833   46765 command_runner.go:130] > AppArmorEnabled:  false
	I0621 19:07:53.643712   46765 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 19:07:53.645035   46765 main.go:141] libmachine: (multinode-851952) Calling .GetIP
	I0621 19:07:53.647772   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:07:53.648174   46765 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:07:53.648202   46765 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:07:53.648416   46765 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 19:07:53.652556   46765 command_runner.go:130] > 192.168.39.1	host.minikube.internal
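
	The grep above confirms the guest already resolves host.minikube.internal to the libvirt gateway (192.168.39.1); on a fresh guest the entry would be appended first. A sketch of the idempotent form, assuming the same gateway address:

	    # ensure the guest can reach the host through a stable name
	    grep -q "host.minikube.internal" /etc/hosts || \
	      printf "192.168.39.1\thost.minikube.internal\n" | sudo tee -a /etc/hosts
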
	I0621 19:07:53.652655   46765 kubeadm.go:877] updating cluster {Name:multinode-851952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.2 ClusterName:multinode-851952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.135 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fa
lse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0621 19:07:53.652763   46765 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 19:07:53.652801   46765 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 19:07:53.700872   46765 command_runner.go:130] > {
	I0621 19:07:53.700893   46765 command_runner.go:130] >   "images": [
	I0621 19:07:53.700899   46765 command_runner.go:130] >     {
	I0621 19:07:53.700913   46765 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0621 19:07:53.700921   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.700929   46765 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0621 19:07:53.700932   46765 command_runner.go:130] >       ],
	I0621 19:07:53.700936   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.700945   46765 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0621 19:07:53.700952   46765 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0621 19:07:53.700955   46765 command_runner.go:130] >       ],
	I0621 19:07:53.700961   46765 command_runner.go:130] >       "size": "65908273",
	I0621 19:07:53.700965   46765 command_runner.go:130] >       "uid": null,
	I0621 19:07:53.700968   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.700973   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.700980   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.700983   46765 command_runner.go:130] >     },
	I0621 19:07:53.700987   46765 command_runner.go:130] >     {
	I0621 19:07:53.700992   46765 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0621 19:07:53.700999   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.701004   46765 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0621 19:07:53.701010   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701014   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.701021   46765 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0621 19:07:53.701028   46765 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0621 19:07:53.701034   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701038   46765 command_runner.go:130] >       "size": "1363676",
	I0621 19:07:53.701054   46765 command_runner.go:130] >       "uid": null,
	I0621 19:07:53.701072   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.701078   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.701082   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.701086   46765 command_runner.go:130] >     },
	I0621 19:07:53.701089   46765 command_runner.go:130] >     {
	I0621 19:07:53.701095   46765 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0621 19:07:53.701098   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.701117   46765 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0621 19:07:53.701127   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701131   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.701141   46765 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0621 19:07:53.701160   46765 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0621 19:07:53.701167   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701171   46765 command_runner.go:130] >       "size": "31470524",
	I0621 19:07:53.701176   46765 command_runner.go:130] >       "uid": null,
	I0621 19:07:53.701180   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.701184   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.701190   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.701193   46765 command_runner.go:130] >     },
	I0621 19:07:53.701197   46765 command_runner.go:130] >     {
	I0621 19:07:53.701203   46765 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0621 19:07:53.701210   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.701215   46765 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0621 19:07:53.701221   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701225   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.701234   46765 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0621 19:07:53.701247   46765 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0621 19:07:53.701253   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701257   46765 command_runner.go:130] >       "size": "61245718",
	I0621 19:07:53.701261   46765 command_runner.go:130] >       "uid": null,
	I0621 19:07:53.701265   46765 command_runner.go:130] >       "username": "nonroot",
	I0621 19:07:53.701269   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.701273   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.701276   46765 command_runner.go:130] >     },
	I0621 19:07:53.701279   46765 command_runner.go:130] >     {
	I0621 19:07:53.701285   46765 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0621 19:07:53.701291   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.701295   46765 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0621 19:07:53.701301   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701305   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.701314   46765 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0621 19:07:53.701324   46765 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0621 19:07:53.701329   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701333   46765 command_runner.go:130] >       "size": "150779692",
	I0621 19:07:53.701339   46765 command_runner.go:130] >       "uid": {
	I0621 19:07:53.701343   46765 command_runner.go:130] >         "value": "0"
	I0621 19:07:53.701346   46765 command_runner.go:130] >       },
	I0621 19:07:53.701350   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.701354   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.701358   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.701363   46765 command_runner.go:130] >     },
	I0621 19:07:53.701366   46765 command_runner.go:130] >     {
	I0621 19:07:53.701372   46765 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0621 19:07:53.701376   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.701382   46765 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0621 19:07:53.701387   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701392   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.701399   46765 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0621 19:07:53.701408   46765 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0621 19:07:53.701414   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701419   46765 command_runner.go:130] >       "size": "117609954",
	I0621 19:07:53.701424   46765 command_runner.go:130] >       "uid": {
	I0621 19:07:53.701428   46765 command_runner.go:130] >         "value": "0"
	I0621 19:07:53.701434   46765 command_runner.go:130] >       },
	I0621 19:07:53.701439   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.701445   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.701449   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.701452   46765 command_runner.go:130] >     },
	I0621 19:07:53.701455   46765 command_runner.go:130] >     {
	I0621 19:07:53.701461   46765 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0621 19:07:53.701467   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.701472   46765 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0621 19:07:53.701478   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701482   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.701490   46765 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0621 19:07:53.701500   46765 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0621 19:07:53.701505   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701509   46765 command_runner.go:130] >       "size": "112194888",
	I0621 19:07:53.701515   46765 command_runner.go:130] >       "uid": {
	I0621 19:07:53.701519   46765 command_runner.go:130] >         "value": "0"
	I0621 19:07:53.701525   46765 command_runner.go:130] >       },
	I0621 19:07:53.701529   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.701535   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.701538   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.701541   46765 command_runner.go:130] >     },
	I0621 19:07:53.701545   46765 command_runner.go:130] >     {
	I0621 19:07:53.701550   46765 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0621 19:07:53.701556   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.701561   46765 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0621 19:07:53.701565   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701568   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.701587   46765 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0621 19:07:53.701596   46765 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0621 19:07:53.701600   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701604   46765 command_runner.go:130] >       "size": "85953433",
	I0621 19:07:53.701608   46765 command_runner.go:130] >       "uid": null,
	I0621 19:07:53.701615   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.701619   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.701622   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.701626   46765 command_runner.go:130] >     },
	I0621 19:07:53.701629   46765 command_runner.go:130] >     {
	I0621 19:07:53.701634   46765 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0621 19:07:53.701638   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.701642   46765 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0621 19:07:53.701645   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701649   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.701655   46765 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0621 19:07:53.701662   46765 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0621 19:07:53.701668   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701672   46765 command_runner.go:130] >       "size": "63051080",
	I0621 19:07:53.701680   46765 command_runner.go:130] >       "uid": {
	I0621 19:07:53.701686   46765 command_runner.go:130] >         "value": "0"
	I0621 19:07:53.701689   46765 command_runner.go:130] >       },
	I0621 19:07:53.701693   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.701697   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.701701   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.701705   46765 command_runner.go:130] >     },
	I0621 19:07:53.701708   46765 command_runner.go:130] >     {
	I0621 19:07:53.701714   46765 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0621 19:07:53.701718   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.701722   46765 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0621 19:07:53.701725   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701729   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.701735   46765 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0621 19:07:53.701744   46765 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0621 19:07:53.701748   46765 command_runner.go:130] >       ],
	I0621 19:07:53.701751   46765 command_runner.go:130] >       "size": "750414",
	I0621 19:07:53.701755   46765 command_runner.go:130] >       "uid": {
	I0621 19:07:53.701759   46765 command_runner.go:130] >         "value": "65535"
	I0621 19:07:53.701763   46765 command_runner.go:130] >       },
	I0621 19:07:53.701767   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.701771   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.701778   46765 command_runner.go:130] >       "pinned": true
	I0621 19:07:53.701786   46765 command_runner.go:130] >     }
	I0621 19:07:53.701791   46765 command_runner.go:130] >   ]
	I0621 19:07:53.701808   46765 command_runner.go:130] > }
	I0621 19:07:53.702309   46765 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 19:07:53.702325   46765 crio.go:433] Images already preloaded, skipping extraction
	I0621 19:07:53.702368   46765 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 19:07:53.740443   46765 command_runner.go:130] > {
	I0621 19:07:53.740468   46765 command_runner.go:130] >   "images": [
	I0621 19:07:53.740472   46765 command_runner.go:130] >     {
	I0621 19:07:53.740480   46765 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0621 19:07:53.740485   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.740491   46765 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0621 19:07:53.740494   46765 command_runner.go:130] >       ],
	I0621 19:07:53.740498   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.740508   46765 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0621 19:07:53.740515   46765 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0621 19:07:53.740518   46765 command_runner.go:130] >       ],
	I0621 19:07:53.740523   46765 command_runner.go:130] >       "size": "65908273",
	I0621 19:07:53.740527   46765 command_runner.go:130] >       "uid": null,
	I0621 19:07:53.740530   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.740539   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.740543   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.740547   46765 command_runner.go:130] >     },
	I0621 19:07:53.740551   46765 command_runner.go:130] >     {
	I0621 19:07:53.740560   46765 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0621 19:07:53.740564   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.740569   46765 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0621 19:07:53.740575   46765 command_runner.go:130] >       ],
	I0621 19:07:53.740580   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.740588   46765 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0621 19:07:53.740596   46765 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0621 19:07:53.740601   46765 command_runner.go:130] >       ],
	I0621 19:07:53.740605   46765 command_runner.go:130] >       "size": "1363676",
	I0621 19:07:53.740610   46765 command_runner.go:130] >       "uid": null,
	I0621 19:07:53.740617   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.740626   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.740632   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.740637   46765 command_runner.go:130] >     },
	I0621 19:07:53.740640   46765 command_runner.go:130] >     {
	I0621 19:07:53.740647   46765 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0621 19:07:53.740651   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.740658   46765 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0621 19:07:53.740662   46765 command_runner.go:130] >       ],
	I0621 19:07:53.740667   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.740674   46765 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0621 19:07:53.740685   46765 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0621 19:07:53.740689   46765 command_runner.go:130] >       ],
	I0621 19:07:53.740694   46765 command_runner.go:130] >       "size": "31470524",
	I0621 19:07:53.740701   46765 command_runner.go:130] >       "uid": null,
	I0621 19:07:53.740706   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.740712   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.740717   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.740723   46765 command_runner.go:130] >     },
	I0621 19:07:53.740727   46765 command_runner.go:130] >     {
	I0621 19:07:53.740736   46765 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0621 19:07:53.740743   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.740748   46765 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0621 19:07:53.740755   46765 command_runner.go:130] >       ],
	I0621 19:07:53.740759   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.740770   46765 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0621 19:07:53.740781   46765 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0621 19:07:53.740792   46765 command_runner.go:130] >       ],
	I0621 19:07:53.740799   46765 command_runner.go:130] >       "size": "61245718",
	I0621 19:07:53.740804   46765 command_runner.go:130] >       "uid": null,
	I0621 19:07:53.740811   46765 command_runner.go:130] >       "username": "nonroot",
	I0621 19:07:53.740816   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.740822   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.740826   46765 command_runner.go:130] >     },
	I0621 19:07:53.740833   46765 command_runner.go:130] >     {
	I0621 19:07:53.740839   46765 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0621 19:07:53.740845   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.740851   46765 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0621 19:07:53.740857   46765 command_runner.go:130] >       ],
	I0621 19:07:53.740861   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.740871   46765 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0621 19:07:53.740885   46765 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0621 19:07:53.740896   46765 command_runner.go:130] >       ],
	I0621 19:07:53.740904   46765 command_runner.go:130] >       "size": "150779692",
	I0621 19:07:53.740912   46765 command_runner.go:130] >       "uid": {
	I0621 19:07:53.740917   46765 command_runner.go:130] >         "value": "0"
	I0621 19:07:53.740923   46765 command_runner.go:130] >       },
	I0621 19:07:53.740928   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.740934   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.740939   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.740945   46765 command_runner.go:130] >     },
	I0621 19:07:53.740949   46765 command_runner.go:130] >     {
	I0621 19:07:53.740958   46765 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0621 19:07:53.740966   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.740971   46765 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0621 19:07:53.740978   46765 command_runner.go:130] >       ],
	I0621 19:07:53.740982   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.740992   46765 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0621 19:07:53.741002   46765 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0621 19:07:53.741009   46765 command_runner.go:130] >       ],
	I0621 19:07:53.741014   46765 command_runner.go:130] >       "size": "117609954",
	I0621 19:07:53.741020   46765 command_runner.go:130] >       "uid": {
	I0621 19:07:53.741025   46765 command_runner.go:130] >         "value": "0"
	I0621 19:07:53.741031   46765 command_runner.go:130] >       },
	I0621 19:07:53.741036   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.741044   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.741052   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.741056   46765 command_runner.go:130] >     },
	I0621 19:07:53.741062   46765 command_runner.go:130] >     {
	I0621 19:07:53.741068   46765 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0621 19:07:53.741075   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.741081   46765 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0621 19:07:53.741087   46765 command_runner.go:130] >       ],
	I0621 19:07:53.741092   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.741102   46765 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0621 19:07:53.741112   46765 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0621 19:07:53.741119   46765 command_runner.go:130] >       ],
	I0621 19:07:53.741130   46765 command_runner.go:130] >       "size": "112194888",
	I0621 19:07:53.741137   46765 command_runner.go:130] >       "uid": {
	I0621 19:07:53.741141   46765 command_runner.go:130] >         "value": "0"
	I0621 19:07:53.741147   46765 command_runner.go:130] >       },
	I0621 19:07:53.741152   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.741159   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.741165   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.741173   46765 command_runner.go:130] >     },
	I0621 19:07:53.741177   46765 command_runner.go:130] >     {
	I0621 19:07:53.741186   46765 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0621 19:07:53.741193   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.741198   46765 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0621 19:07:53.741205   46765 command_runner.go:130] >       ],
	I0621 19:07:53.741209   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.741226   46765 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0621 19:07:53.741236   46765 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0621 19:07:53.741243   46765 command_runner.go:130] >       ],
	I0621 19:07:53.741247   46765 command_runner.go:130] >       "size": "85953433",
	I0621 19:07:53.741254   46765 command_runner.go:130] >       "uid": null,
	I0621 19:07:53.741258   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.741265   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.741270   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.741276   46765 command_runner.go:130] >     },
	I0621 19:07:53.741280   46765 command_runner.go:130] >     {
	I0621 19:07:53.741290   46765 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0621 19:07:53.741297   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.741302   46765 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0621 19:07:53.741308   46765 command_runner.go:130] >       ],
	I0621 19:07:53.741313   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.741323   46765 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0621 19:07:53.741333   46765 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0621 19:07:53.741339   46765 command_runner.go:130] >       ],
	I0621 19:07:53.741344   46765 command_runner.go:130] >       "size": "63051080",
	I0621 19:07:53.741350   46765 command_runner.go:130] >       "uid": {
	I0621 19:07:53.741354   46765 command_runner.go:130] >         "value": "0"
	I0621 19:07:53.741360   46765 command_runner.go:130] >       },
	I0621 19:07:53.741364   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.741371   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.741376   46765 command_runner.go:130] >       "pinned": false
	I0621 19:07:53.741384   46765 command_runner.go:130] >     },
	I0621 19:07:53.741395   46765 command_runner.go:130] >     {
	I0621 19:07:53.741407   46765 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0621 19:07:53.741417   46765 command_runner.go:130] >       "repoTags": [
	I0621 19:07:53.741428   46765 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0621 19:07:53.741439   46765 command_runner.go:130] >       ],
	I0621 19:07:53.741447   46765 command_runner.go:130] >       "repoDigests": [
	I0621 19:07:53.741459   46765 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0621 19:07:53.741469   46765 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0621 19:07:53.741473   46765 command_runner.go:130] >       ],
	I0621 19:07:53.741477   46765 command_runner.go:130] >       "size": "750414",
	I0621 19:07:53.741481   46765 command_runner.go:130] >       "uid": {
	I0621 19:07:53.741486   46765 command_runner.go:130] >         "value": "65535"
	I0621 19:07:53.741492   46765 command_runner.go:130] >       },
	I0621 19:07:53.741502   46765 command_runner.go:130] >       "username": "",
	I0621 19:07:53.741509   46765 command_runner.go:130] >       "spec": null,
	I0621 19:07:53.741519   46765 command_runner.go:130] >       "pinned": true
	I0621 19:07:53.741525   46765 command_runner.go:130] >     }
	I0621 19:07:53.741536   46765 command_runner.go:130] >   ]
	I0621 19:07:53.741542   46765 command_runner.go:130] > }
	I0621 19:07:53.741861   46765 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 19:07:53.741913   46765 cache_images.go:84] Images are preloaded, skipping loading
	I0621 19:07:53.741928   46765 kubeadm.go:928] updating node { 192.168.39.146 8443 v1.30.2 crio true true} ...
	I0621 19:07:53.742044   46765 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-851952 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-851952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0621 19:07:53.742126   46765 ssh_runner.go:195] Run: crio config
	I0621 19:07:53.788616   46765 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0621 19:07:53.788650   46765 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0621 19:07:53.788660   46765 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0621 19:07:53.788665   46765 command_runner.go:130] > #
	I0621 19:07:53.788677   46765 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0621 19:07:53.788685   46765 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0621 19:07:53.788693   46765 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0621 19:07:53.788703   46765 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0621 19:07:53.788708   46765 command_runner.go:130] > # reload'.
	I0621 19:07:53.788718   46765 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0621 19:07:53.788729   46765 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0621 19:07:53.788741   46765 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0621 19:07:53.788751   46765 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0621 19:07:53.788761   46765 command_runner.go:130] > [crio]
	I0621 19:07:53.788772   46765 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0621 19:07:53.788784   46765 command_runner.go:130] > # containers images, in this directory.
	I0621 19:07:53.788814   46765 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0621 19:07:53.788841   46765 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0621 19:07:53.789641   46765 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0621 19:07:53.789666   46765 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0621 19:07:53.790476   46765 command_runner.go:130] > # imagestore = ""
	I0621 19:07:53.790492   46765 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0621 19:07:53.790501   46765 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0621 19:07:53.790647   46765 command_runner.go:130] > storage_driver = "overlay"
	I0621 19:07:53.790695   46765 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0621 19:07:53.790713   46765 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0621 19:07:53.790722   46765 command_runner.go:130] > storage_option = [
	I0621 19:07:53.790786   46765 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0621 19:07:53.790825   46765 command_runner.go:130] > ]
	I0621 19:07:53.790841   46765 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0621 19:07:53.790854   46765 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0621 19:07:53.791037   46765 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0621 19:07:53.791050   46765 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0621 19:07:53.791059   46765 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0621 19:07:53.791089   46765 command_runner.go:130] > # always happen on a node reboot
	I0621 19:07:53.791303   46765 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0621 19:07:53.791326   46765 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0621 19:07:53.791339   46765 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0621 19:07:53.791349   46765 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0621 19:07:53.791477   46765 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0621 19:07:53.791499   46765 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0621 19:07:53.791537   46765 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0621 19:07:53.791675   46765 command_runner.go:130] > # internal_wipe = true
	I0621 19:07:53.791691   46765 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0621 19:07:53.791697   46765 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0621 19:07:53.791779   46765 command_runner.go:130] > # internal_repair = false
	I0621 19:07:53.791795   46765 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0621 19:07:53.791803   46765 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0621 19:07:53.791813   46765 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0621 19:07:53.791874   46765 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0621 19:07:53.791888   46765 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0621 19:07:53.791894   46765 command_runner.go:130] > [crio.api]
	I0621 19:07:53.791902   46765 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0621 19:07:53.792075   46765 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0621 19:07:53.792088   46765 command_runner.go:130] > # IP address on which the stream server will listen.
	I0621 19:07:53.792095   46765 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0621 19:07:53.792106   46765 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0621 19:07:53.792119   46765 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0621 19:07:53.792315   46765 command_runner.go:130] > # stream_port = "0"
	I0621 19:07:53.792330   46765 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0621 19:07:53.792505   46765 command_runner.go:130] > # stream_enable_tls = false
	I0621 19:07:53.792516   46765 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0621 19:07:53.792680   46765 command_runner.go:130] > # stream_idle_timeout = ""
	I0621 19:07:53.792695   46765 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0621 19:07:53.792705   46765 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0621 19:07:53.792710   46765 command_runner.go:130] > # minutes.
	I0621 19:07:53.792831   46765 command_runner.go:130] > # stream_tls_cert = ""
	I0621 19:07:53.792845   46765 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0621 19:07:53.792855   46765 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0621 19:07:53.792985   46765 command_runner.go:130] > # stream_tls_key = ""
	I0621 19:07:53.792996   46765 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0621 19:07:53.793002   46765 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0621 19:07:53.793017   46765 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0621 19:07:53.793308   46765 command_runner.go:130] > # stream_tls_ca = ""
	I0621 19:07:53.793331   46765 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0621 19:07:53.793339   46765 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0621 19:07:53.793355   46765 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0621 19:07:53.793454   46765 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0621 19:07:53.793471   46765 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0621 19:07:53.793480   46765 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0621 19:07:53.793486   46765 command_runner.go:130] > [crio.runtime]
	I0621 19:07:53.793499   46765 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0621 19:07:53.793510   46765 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0621 19:07:53.793520   46765 command_runner.go:130] > # "nofile=1024:2048"
	I0621 19:07:53.793529   46765 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0621 19:07:53.793565   46765 command_runner.go:130] > # default_ulimits = [
	I0621 19:07:53.793644   46765 command_runner.go:130] > # ]
	I0621 19:07:53.793654   46765 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0621 19:07:53.793886   46765 command_runner.go:130] > # no_pivot = false
	I0621 19:07:53.793897   46765 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0621 19:07:53.793906   46765 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0621 19:07:53.794067   46765 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0621 19:07:53.794082   46765 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0621 19:07:53.794090   46765 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0621 19:07:53.794102   46765 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0621 19:07:53.794181   46765 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0621 19:07:53.794196   46765 command_runner.go:130] > # Cgroup setting for conmon
	I0621 19:07:53.794207   46765 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0621 19:07:53.794431   46765 command_runner.go:130] > conmon_cgroup = "pod"
	I0621 19:07:53.794445   46765 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0621 19:07:53.794451   46765 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0621 19:07:53.794457   46765 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0621 19:07:53.794460   46765 command_runner.go:130] > conmon_env = [
	I0621 19:07:53.794668   46765 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0621 19:07:53.794679   46765 command_runner.go:130] > ]
	I0621 19:07:53.794688   46765 command_runner.go:130] > # Additional environment variables to set for all the
	I0621 19:07:53.794695   46765 command_runner.go:130] > # containers. These are overridden if set in the
	I0621 19:07:53.794703   46765 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0621 19:07:53.794826   46765 command_runner.go:130] > # default_env = [
	I0621 19:07:53.794947   46765 command_runner.go:130] > # ]
	I0621 19:07:53.794965   46765 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0621 19:07:53.794978   46765 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0621 19:07:53.795132   46765 command_runner.go:130] > # selinux = false
	I0621 19:07:53.795149   46765 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0621 19:07:53.795160   46765 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0621 19:07:53.795188   46765 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0621 19:07:53.795357   46765 command_runner.go:130] > # seccomp_profile = ""
	I0621 19:07:53.795375   46765 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0621 19:07:53.795382   46765 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0621 19:07:53.795391   46765 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0621 19:07:53.795407   46765 command_runner.go:130] > # which might increase security.
	I0621 19:07:53.795418   46765 command_runner.go:130] > # This option is currently deprecated,
	I0621 19:07:53.795434   46765 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0621 19:07:53.795446   46765 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0621 19:07:53.795457   46765 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0621 19:07:53.795470   46765 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0621 19:07:53.795478   46765 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0621 19:07:53.795489   46765 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0621 19:07:53.795504   46765 command_runner.go:130] > # This option supports live configuration reload.
	I0621 19:07:53.795661   46765 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0621 19:07:53.795680   46765 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0621 19:07:53.795688   46765 command_runner.go:130] > # the cgroup blockio controller.
	I0621 19:07:53.795817   46765 command_runner.go:130] > # blockio_config_file = ""
	I0621 19:07:53.795834   46765 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0621 19:07:53.795842   46765 command_runner.go:130] > # blockio parameters.
	I0621 19:07:53.796044   46765 command_runner.go:130] > # blockio_reload = false
	I0621 19:07:53.796067   46765 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0621 19:07:53.796074   46765 command_runner.go:130] > # irqbalance daemon.
	I0621 19:07:53.796197   46765 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0621 19:07:53.796216   46765 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0621 19:07:53.796227   46765 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0621 19:07:53.796240   46765 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0621 19:07:53.796419   46765 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0621 19:07:53.796439   46765 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0621 19:07:53.796448   46765 command_runner.go:130] > # This option supports live configuration reload.
	I0621 19:07:53.796553   46765 command_runner.go:130] > # rdt_config_file = ""
	I0621 19:07:53.796568   46765 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0621 19:07:53.796638   46765 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0621 19:07:53.796665   46765 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0621 19:07:53.796795   46765 command_runner.go:130] > # separate_pull_cgroup = ""
	I0621 19:07:53.796810   46765 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0621 19:07:53.796820   46765 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0621 19:07:53.796830   46765 command_runner.go:130] > # will be added.
	I0621 19:07:53.798104   46765 command_runner.go:130] > # default_capabilities = [
	I0621 19:07:53.798114   46765 command_runner.go:130] > # 	"CHOWN",
	I0621 19:07:53.798119   46765 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0621 19:07:53.798122   46765 command_runner.go:130] > # 	"FSETID",
	I0621 19:07:53.798126   46765 command_runner.go:130] > # 	"FOWNER",
	I0621 19:07:53.798129   46765 command_runner.go:130] > # 	"SETGID",
	I0621 19:07:53.798133   46765 command_runner.go:130] > # 	"SETUID",
	I0621 19:07:53.798138   46765 command_runner.go:130] > # 	"SETPCAP",
	I0621 19:07:53.798144   46765 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0621 19:07:53.798150   46765 command_runner.go:130] > # 	"KILL",
	I0621 19:07:53.798157   46765 command_runner.go:130] > # ]
	I0621 19:07:53.798175   46765 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0621 19:07:53.798187   46765 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0621 19:07:53.798192   46765 command_runner.go:130] > # add_inheritable_capabilities = false
	I0621 19:07:53.798197   46765 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0621 19:07:53.798205   46765 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0621 19:07:53.798210   46765 command_runner.go:130] > default_sysctls = [
	I0621 19:07:53.798215   46765 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0621 19:07:53.798221   46765 command_runner.go:130] > ]
	I0621 19:07:53.798226   46765 command_runner.go:130] > # List of devices on the host that a
	I0621 19:07:53.798239   46765 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0621 19:07:53.798249   46765 command_runner.go:130] > # allowed_devices = [
	I0621 19:07:53.798260   46765 command_runner.go:130] > # 	"/dev/fuse",
	I0621 19:07:53.798269   46765 command_runner.go:130] > # ]
	I0621 19:07:53.798278   46765 command_runner.go:130] > # List of additional devices. specified as
	I0621 19:07:53.798288   46765 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0621 19:07:53.798295   46765 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0621 19:07:53.798303   46765 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0621 19:07:53.798309   46765 command_runner.go:130] > # additional_devices = [
	I0621 19:07:53.798313   46765 command_runner.go:130] > # ]
	I0621 19:07:53.798322   46765 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0621 19:07:53.798330   46765 command_runner.go:130] > # cdi_spec_dirs = [
	I0621 19:07:53.798340   46765 command_runner.go:130] > # 	"/etc/cdi",
	I0621 19:07:53.798350   46765 command_runner.go:130] > # 	"/var/run/cdi",
	I0621 19:07:53.798355   46765 command_runner.go:130] > # ]
	I0621 19:07:53.798369   46765 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0621 19:07:53.798382   46765 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0621 19:07:53.798391   46765 command_runner.go:130] > # Defaults to false.
	I0621 19:07:53.798398   46765 command_runner.go:130] > # device_ownership_from_security_context = false
	I0621 19:07:53.798407   46765 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0621 19:07:53.798415   46765 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0621 19:07:53.798421   46765 command_runner.go:130] > # hooks_dir = [
	I0621 19:07:53.798426   46765 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0621 19:07:53.798432   46765 command_runner.go:130] > # ]
	I0621 19:07:53.798442   46765 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0621 19:07:53.798456   46765 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0621 19:07:53.798468   46765 command_runner.go:130] > # its default mounts from the following two files:
	I0621 19:07:53.798476   46765 command_runner.go:130] > #
	I0621 19:07:53.798506   46765 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0621 19:07:53.798517   46765 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0621 19:07:53.798522   46765 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0621 19:07:53.798528   46765 command_runner.go:130] > #
	I0621 19:07:53.798535   46765 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0621 19:07:53.798549   46765 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0621 19:07:53.798563   46765 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0621 19:07:53.798575   46765 command_runner.go:130] > #      only add mounts it finds in this file.
	I0621 19:07:53.798583   46765 command_runner.go:130] > #
	I0621 19:07:53.798590   46765 command_runner.go:130] > # default_mounts_file = ""
	I0621 19:07:53.798602   46765 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0621 19:07:53.798616   46765 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0621 19:07:53.798630   46765 command_runner.go:130] > pids_limit = 1024
	I0621 19:07:53.798640   46765 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0621 19:07:53.798653   46765 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0621 19:07:53.798667   46765 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0621 19:07:53.798684   46765 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0621 19:07:53.798694   46765 command_runner.go:130] > # log_size_max = -1
	I0621 19:07:53.798708   46765 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0621 19:07:53.798717   46765 command_runner.go:130] > # log_to_journald = false
	I0621 19:07:53.798730   46765 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0621 19:07:53.798738   46765 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0621 19:07:53.798747   46765 command_runner.go:130] > # Path to directory for container attach sockets.
	I0621 19:07:53.798759   46765 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0621 19:07:53.798771   46765 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0621 19:07:53.798782   46765 command_runner.go:130] > # bind_mount_prefix = ""
	I0621 19:07:53.798794   46765 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0621 19:07:53.798804   46765 command_runner.go:130] > # read_only = false
	I0621 19:07:53.798817   46765 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0621 19:07:53.798829   46765 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0621 19:07:53.798837   46765 command_runner.go:130] > # live configuration reload.
	I0621 19:07:53.798844   46765 command_runner.go:130] > # log_level = "info"
	I0621 19:07:53.798853   46765 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0621 19:07:53.798865   46765 command_runner.go:130] > # This option supports live configuration reload.
	I0621 19:07:53.798872   46765 command_runner.go:130] > # log_filter = ""
	I0621 19:07:53.798885   46765 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0621 19:07:53.798899   46765 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0621 19:07:53.798909   46765 command_runner.go:130] > # separated by comma.
	I0621 19:07:53.798923   46765 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0621 19:07:53.798931   46765 command_runner.go:130] > # uid_mappings = ""
	I0621 19:07:53.798937   46765 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0621 19:07:53.798949   46765 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0621 19:07:53.798958   46765 command_runner.go:130] > # separated by comma.
	I0621 19:07:53.798971   46765 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0621 19:07:53.798981   46765 command_runner.go:130] > # gid_mappings = ""
	I0621 19:07:53.798993   46765 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0621 19:07:53.799006   46765 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0621 19:07:53.799019   46765 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0621 19:07:53.799030   46765 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0621 19:07:53.799038   46765 command_runner.go:130] > # minimum_mappable_uid = -1
	I0621 19:07:53.799050   46765 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0621 19:07:53.799063   46765 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0621 19:07:53.799075   46765 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0621 19:07:53.799090   46765 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0621 19:07:53.799100   46765 command_runner.go:130] > # minimum_mappable_gid = -1
	I0621 19:07:53.799112   46765 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0621 19:07:53.799126   46765 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0621 19:07:53.799134   46765 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0621 19:07:53.799142   46765 command_runner.go:130] > # ctr_stop_timeout = 30
	I0621 19:07:53.799152   46765 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0621 19:07:53.799169   46765 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0621 19:07:53.799179   46765 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0621 19:07:53.799190   46765 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0621 19:07:53.799200   46765 command_runner.go:130] > drop_infra_ctr = false
	I0621 19:07:53.799212   46765 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0621 19:07:53.799224   46765 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0621 19:07:53.799235   46765 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0621 19:07:53.799240   46765 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0621 19:07:53.799255   46765 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0621 19:07:53.799268   46765 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0621 19:07:53.799277   46765 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0621 19:07:53.799288   46765 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0621 19:07:53.799299   46765 command_runner.go:130] > # shared_cpuset = ""
	I0621 19:07:53.799308   46765 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0621 19:07:53.799316   46765 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0621 19:07:53.799321   46765 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0621 19:07:53.799333   46765 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0621 19:07:53.799343   46765 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0621 19:07:53.799356   46765 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0621 19:07:53.799370   46765 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0621 19:07:53.799376   46765 command_runner.go:130] > # enable_criu_support = false
	I0621 19:07:53.799382   46765 command_runner.go:130] > # Enable/disable the generation of the container,
	I0621 19:07:53.799391   46765 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0621 19:07:53.799398   46765 command_runner.go:130] > # enable_pod_events = false
	I0621 19:07:53.799408   46765 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0621 19:07:53.799432   46765 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0621 19:07:53.799441   46765 command_runner.go:130] > # default_runtime = "runc"
	I0621 19:07:53.799451   46765 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0621 19:07:53.799463   46765 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0621 19:07:53.799479   46765 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0621 19:07:53.799489   46765 command_runner.go:130] > # creation as a file is not desired either.
	I0621 19:07:53.799518   46765 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0621 19:07:53.799533   46765 command_runner.go:130] > # the hostname is being managed dynamically.
	I0621 19:07:53.799543   46765 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0621 19:07:53.799551   46765 command_runner.go:130] > # ]
	I0621 19:07:53.799562   46765 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0621 19:07:53.799576   46765 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0621 19:07:53.799588   46765 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0621 19:07:53.799599   46765 command_runner.go:130] > # Each entry in the table should follow the format:
	I0621 19:07:53.799606   46765 command_runner.go:130] > #
	I0621 19:07:53.799613   46765 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0621 19:07:53.799618   46765 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0621 19:07:53.799652   46765 command_runner.go:130] > # runtime_type = "oci"
	I0621 19:07:53.799659   46765 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0621 19:07:53.799663   46765 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0621 19:07:53.799670   46765 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0621 19:07:53.799675   46765 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0621 19:07:53.799681   46765 command_runner.go:130] > # monitor_env = []
	I0621 19:07:53.799686   46765 command_runner.go:130] > # privileged_without_host_devices = false
	I0621 19:07:53.799692   46765 command_runner.go:130] > # allowed_annotations = []
	I0621 19:07:53.799699   46765 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0621 19:07:53.799704   46765 command_runner.go:130] > # Where:
	I0621 19:07:53.799709   46765 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0621 19:07:53.799717   46765 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0621 19:07:53.799726   46765 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0621 19:07:53.799732   46765 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0621 19:07:53.799738   46765 command_runner.go:130] > #   in $PATH.
	I0621 19:07:53.799744   46765 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0621 19:07:53.799750   46765 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0621 19:07:53.799756   46765 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0621 19:07:53.799763   46765 command_runner.go:130] > #   state.
	I0621 19:07:53.799768   46765 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0621 19:07:53.799776   46765 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0621 19:07:53.799782   46765 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0621 19:07:53.799789   46765 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0621 19:07:53.799794   46765 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0621 19:07:53.799802   46765 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0621 19:07:53.799809   46765 command_runner.go:130] > #   The currently recognized values are:
	I0621 19:07:53.799815   46765 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0621 19:07:53.799824   46765 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0621 19:07:53.799829   46765 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0621 19:07:53.799837   46765 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0621 19:07:53.799846   46765 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0621 19:07:53.799854   46765 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0621 19:07:53.799860   46765 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0621 19:07:53.799869   46765 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0621 19:07:53.799875   46765 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0621 19:07:53.799883   46765 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0621 19:07:53.799888   46765 command_runner.go:130] > #   deprecated option "conmon".
	I0621 19:07:53.799896   46765 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0621 19:07:53.799903   46765 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0621 19:07:53.799908   46765 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0621 19:07:53.799916   46765 command_runner.go:130] > #   should be moved to the container's cgroup
	I0621 19:07:53.799929   46765 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0621 19:07:53.799936   46765 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0621 19:07:53.799944   46765 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0621 19:07:53.799951   46765 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0621 19:07:53.799955   46765 command_runner.go:130] > #
	I0621 19:07:53.799960   46765 command_runner.go:130] > # Using the seccomp notifier feature:
	I0621 19:07:53.799963   46765 command_runner.go:130] > #
	I0621 19:07:53.799969   46765 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0621 19:07:53.799977   46765 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0621 19:07:53.799983   46765 command_runner.go:130] > #
	I0621 19:07:53.799989   46765 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0621 19:07:53.799998   46765 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0621 19:07:53.800001   46765 command_runner.go:130] > #
	I0621 19:07:53.800006   46765 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0621 19:07:53.800012   46765 command_runner.go:130] > # feature.
	I0621 19:07:53.800015   46765 command_runner.go:130] > #
	I0621 19:07:53.800020   46765 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0621 19:07:53.800026   46765 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0621 19:07:53.800034   46765 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0621 19:07:53.800042   46765 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0621 19:07:53.800047   46765 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0621 19:07:53.800053   46765 command_runner.go:130] > #
	I0621 19:07:53.800058   46765 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0621 19:07:53.800066   46765 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0621 19:07:53.800069   46765 command_runner.go:130] > #
	I0621 19:07:53.800077   46765 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0621 19:07:53.800085   46765 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0621 19:07:53.800087   46765 command_runner.go:130] > #
	I0621 19:07:53.800094   46765 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0621 19:07:53.800101   46765 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0621 19:07:53.800105   46765 command_runner.go:130] > # limitation.
	I0621 19:07:53.800112   46765 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0621 19:07:53.800116   46765 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0621 19:07:53.800122   46765 command_runner.go:130] > runtime_type = "oci"
	I0621 19:07:53.800126   46765 command_runner.go:130] > runtime_root = "/run/runc"
	I0621 19:07:53.800133   46765 command_runner.go:130] > runtime_config_path = ""
	I0621 19:07:53.800141   46765 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0621 19:07:53.800146   46765 command_runner.go:130] > monitor_cgroup = "pod"
	I0621 19:07:53.800152   46765 command_runner.go:130] > monitor_exec_cgroup = ""
	I0621 19:07:53.800167   46765 command_runner.go:130] > monitor_env = [
	I0621 19:07:53.800176   46765 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0621 19:07:53.800184   46765 command_runner.go:130] > ]
	I0621 19:07:53.800191   46765 command_runner.go:130] > privileged_without_host_devices = false
	I0621 19:07:53.800205   46765 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0621 19:07:53.800216   46765 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0621 19:07:53.800227   46765 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0621 19:07:53.800238   46765 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0621 19:07:53.800249   46765 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0621 19:07:53.800254   46765 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0621 19:07:53.800263   46765 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0621 19:07:53.800272   46765 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0621 19:07:53.800277   46765 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0621 19:07:53.800286   46765 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0621 19:07:53.800290   46765 command_runner.go:130] > # Example:
	I0621 19:07:53.800296   46765 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0621 19:07:53.800301   46765 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0621 19:07:53.800308   46765 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0621 19:07:53.800313   46765 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0621 19:07:53.800319   46765 command_runner.go:130] > # cpuset = 0
	I0621 19:07:53.800322   46765 command_runner.go:130] > # cpushares = "0-1"
	I0621 19:07:53.800328   46765 command_runner.go:130] > # Where:
	I0621 19:07:53.800332   46765 command_runner.go:130] > # The workload name is workload-type.
	I0621 19:07:53.800338   46765 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0621 19:07:53.800346   46765 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0621 19:07:53.800354   46765 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0621 19:07:53.800361   46765 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0621 19:07:53.800369   46765 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0621 19:07:53.800375   46765 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0621 19:07:53.800384   46765 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0621 19:07:53.800390   46765 command_runner.go:130] > # Default value is set to true
	I0621 19:07:53.800394   46765 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0621 19:07:53.800402   46765 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0621 19:07:53.800407   46765 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0621 19:07:53.800414   46765 command_runner.go:130] > # Default value is set to 'false'
	I0621 19:07:53.800419   46765 command_runner.go:130] > # disable_hostport_mapping = false
	I0621 19:07:53.800428   46765 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0621 19:07:53.800433   46765 command_runner.go:130] > #
	I0621 19:07:53.800439   46765 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0621 19:07:53.800447   46765 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0621 19:07:53.800455   46765 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0621 19:07:53.800461   46765 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0621 19:07:53.800466   46765 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0621 19:07:53.800469   46765 command_runner.go:130] > [crio.image]
	I0621 19:07:53.800474   46765 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0621 19:07:53.800478   46765 command_runner.go:130] > # default_transport = "docker://"
	I0621 19:07:53.800484   46765 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0621 19:07:53.800489   46765 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0621 19:07:53.800493   46765 command_runner.go:130] > # global_auth_file = ""
	I0621 19:07:53.800498   46765 command_runner.go:130] > # The image used to instantiate infra containers.
	I0621 19:07:53.800502   46765 command_runner.go:130] > # This option supports live configuration reload.
	I0621 19:07:53.800507   46765 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0621 19:07:53.800512   46765 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0621 19:07:53.800518   46765 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0621 19:07:53.800522   46765 command_runner.go:130] > # This option supports live configuration reload.
	I0621 19:07:53.800526   46765 command_runner.go:130] > # pause_image_auth_file = ""
	I0621 19:07:53.800531   46765 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0621 19:07:53.800536   46765 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0621 19:07:53.800541   46765 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0621 19:07:53.800546   46765 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0621 19:07:53.800550   46765 command_runner.go:130] > # pause_command = "/pause"
	I0621 19:07:53.800555   46765 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0621 19:07:53.800560   46765 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0621 19:07:53.800566   46765 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0621 19:07:53.800571   46765 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0621 19:07:53.800576   46765 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0621 19:07:53.800582   46765 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0621 19:07:53.800585   46765 command_runner.go:130] > # pinned_images = [
	I0621 19:07:53.800588   46765 command_runner.go:130] > # ]
	I0621 19:07:53.800599   46765 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0621 19:07:53.800605   46765 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0621 19:07:53.800611   46765 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0621 19:07:53.800616   46765 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0621 19:07:53.800620   46765 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0621 19:07:53.800624   46765 command_runner.go:130] > # signature_policy = ""
	I0621 19:07:53.800628   46765 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0621 19:07:53.800636   46765 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0621 19:07:53.800641   46765 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0621 19:07:53.800649   46765 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0621 19:07:53.800654   46765 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0621 19:07:53.800661   46765 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0621 19:07:53.800666   46765 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0621 19:07:53.800674   46765 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0621 19:07:53.800681   46765 command_runner.go:130] > # changing them here.
	I0621 19:07:53.800684   46765 command_runner.go:130] > # insecure_registries = [
	I0621 19:07:53.800690   46765 command_runner.go:130] > # ]
	I0621 19:07:53.800696   46765 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0621 19:07:53.800703   46765 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0621 19:07:53.800707   46765 command_runner.go:130] > # image_volumes = "mkdir"
	I0621 19:07:53.800714   46765 command_runner.go:130] > # Temporary directory to use for storing big files
	I0621 19:07:53.800717   46765 command_runner.go:130] > # big_files_temporary_dir = ""
	I0621 19:07:53.800725   46765 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0621 19:07:53.800732   46765 command_runner.go:130] > # CNI plugins.
	I0621 19:07:53.800735   46765 command_runner.go:130] > [crio.network]
	I0621 19:07:53.800741   46765 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0621 19:07:53.800748   46765 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0621 19:07:53.800752   46765 command_runner.go:130] > # cni_default_network = ""
	I0621 19:07:53.800760   46765 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0621 19:07:53.800766   46765 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0621 19:07:53.800771   46765 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0621 19:07:53.800778   46765 command_runner.go:130] > # plugin_dirs = [
	I0621 19:07:53.800781   46765 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0621 19:07:53.800787   46765 command_runner.go:130] > # ]
	I0621 19:07:53.800792   46765 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0621 19:07:53.800798   46765 command_runner.go:130] > [crio.metrics]
	I0621 19:07:53.800807   46765 command_runner.go:130] > # Globally enable or disable metrics support.
	I0621 19:07:53.800813   46765 command_runner.go:130] > enable_metrics = true
	I0621 19:07:53.800818   46765 command_runner.go:130] > # Specify enabled metrics collectors.
	I0621 19:07:53.800824   46765 command_runner.go:130] > # Per default all metrics are enabled.
	I0621 19:07:53.800830   46765 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0621 19:07:53.800838   46765 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0621 19:07:53.800846   46765 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0621 19:07:53.800849   46765 command_runner.go:130] > # metrics_collectors = [
	I0621 19:07:53.800853   46765 command_runner.go:130] > # 	"operations",
	I0621 19:07:53.800858   46765 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0621 19:07:53.800864   46765 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0621 19:07:53.800868   46765 command_runner.go:130] > # 	"operations_errors",
	I0621 19:07:53.800874   46765 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0621 19:07:53.800878   46765 command_runner.go:130] > # 	"image_pulls_by_name",
	I0621 19:07:53.800885   46765 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0621 19:07:53.800889   46765 command_runner.go:130] > # 	"image_pulls_failures",
	I0621 19:07:53.800895   46765 command_runner.go:130] > # 	"image_pulls_successes",
	I0621 19:07:53.800899   46765 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0621 19:07:53.800903   46765 command_runner.go:130] > # 	"image_layer_reuse",
	I0621 19:07:53.800910   46765 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0621 19:07:53.800918   46765 command_runner.go:130] > # 	"containers_oom_total",
	I0621 19:07:53.800922   46765 command_runner.go:130] > # 	"containers_oom",
	I0621 19:07:53.800926   46765 command_runner.go:130] > # 	"processes_defunct",
	I0621 19:07:53.800930   46765 command_runner.go:130] > # 	"operations_total",
	I0621 19:07:53.800935   46765 command_runner.go:130] > # 	"operations_latency_seconds",
	I0621 19:07:53.800939   46765 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0621 19:07:53.800945   46765 command_runner.go:130] > # 	"operations_errors_total",
	I0621 19:07:53.800949   46765 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0621 19:07:53.800956   46765 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0621 19:07:53.800960   46765 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0621 19:07:53.800967   46765 command_runner.go:130] > # 	"image_pulls_success_total",
	I0621 19:07:53.800971   46765 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0621 19:07:53.800978   46765 command_runner.go:130] > # 	"containers_oom_count_total",
	I0621 19:07:53.800982   46765 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0621 19:07:53.800986   46765 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0621 19:07:53.800992   46765 command_runner.go:130] > # ]
	I0621 19:07:53.800998   46765 command_runner.go:130] > # The port on which the metrics server will listen.
	I0621 19:07:53.801004   46765 command_runner.go:130] > # metrics_port = 9090
	I0621 19:07:53.801018   46765 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0621 19:07:53.801022   46765 command_runner.go:130] > # metrics_socket = ""
	I0621 19:07:53.801027   46765 command_runner.go:130] > # The certificate for the secure metrics server.
	I0621 19:07:53.801033   46765 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0621 19:07:53.801041   46765 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0621 19:07:53.801047   46765 command_runner.go:130] > # certificate on any modification event.
	I0621 19:07:53.801051   46765 command_runner.go:130] > # metrics_cert = ""
	I0621 19:07:53.801058   46765 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0621 19:07:53.801063   46765 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0621 19:07:53.801067   46765 command_runner.go:130] > # metrics_key = ""
	I0621 19:07:53.801072   46765 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0621 19:07:53.801078   46765 command_runner.go:130] > [crio.tracing]
	I0621 19:07:53.801084   46765 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0621 19:07:53.801090   46765 command_runner.go:130] > # enable_tracing = false
	I0621 19:07:53.801095   46765 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0621 19:07:53.801101   46765 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0621 19:07:53.801107   46765 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0621 19:07:53.801114   46765 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0621 19:07:53.801118   46765 command_runner.go:130] > # CRI-O NRI configuration.
	I0621 19:07:53.801123   46765 command_runner.go:130] > [crio.nri]
	I0621 19:07:53.801127   46765 command_runner.go:130] > # Globally enable or disable NRI.
	I0621 19:07:53.801135   46765 command_runner.go:130] > # enable_nri = false
	I0621 19:07:53.801141   46765 command_runner.go:130] > # NRI socket to listen on.
	I0621 19:07:53.801151   46765 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0621 19:07:53.801164   46765 command_runner.go:130] > # NRI plugin directory to use.
	I0621 19:07:53.801175   46765 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0621 19:07:53.801182   46765 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0621 19:07:53.801193   46765 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0621 19:07:53.801201   46765 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0621 19:07:53.801210   46765 command_runner.go:130] > # nri_disable_connections = false
	I0621 19:07:53.801216   46765 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0621 19:07:53.801223   46765 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0621 19:07:53.801228   46765 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0621 19:07:53.801235   46765 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0621 19:07:53.801246   46765 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0621 19:07:53.801252   46765 command_runner.go:130] > [crio.stats]
	I0621 19:07:53.801260   46765 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0621 19:07:53.801267   46765 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0621 19:07:53.801272   46765 command_runner.go:130] > # stats_collection_period = 0
	I0621 19:07:53.801305   46765 command_runner.go:130] ! time="2024-06-21 19:07:53.760785412Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0621 19:07:53.801318   46765 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
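The CRI-O configuration dumped above is plain TOML organized under the [crio.*] tables. As a rough illustration (not minikube's own code), the sketch below decodes a few of those fields with the github.com/BurntSushi/toml package; the /etc/crio/crio.conf path and the chosen fields are assumptions made only for this example.

// Sketch: read a handful of the settings shown in the dump above.
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type crioConfig struct {
	Crio struct {
		Runtime struct {
			DefaultRuntime string `toml:"default_runtime"`
			PinnsPath      string `toml:"pinns_path"`
			DropInfraCtr   bool   `toml:"drop_infra_ctr"`
		} `toml:"runtime"`
		Metrics struct {
			EnableMetrics bool `toml:"enable_metrics"`
		} `toml:"metrics"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConfig
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pinns_path:", cfg.Crio.Runtime.PinnsPath)        // "/usr/bin/pinns" in the dump above
	fmt.Println("drop_infra_ctr:", cfg.Crio.Runtime.DropInfraCtr) // false in the dump above
	fmt.Println("enable_metrics:", cfg.Crio.Metrics.EnableMetrics) // true in the dump above
}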
	I0621 19:07:53.801406   46765 cni.go:84] Creating CNI manager for ""
	I0621 19:07:53.801416   46765 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0621 19:07:53.801423   46765 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0621 19:07:53.801442   46765 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.146 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-851952 NodeName:multinode-851952 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0621 19:07:53.801568   46765 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-851952"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
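The kubeadm config printed above is one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch of walking those documents and printing each kind, assuming the stream has been saved locally as kubeadm.yaml and that gopkg.in/yaml.v3 is available:

// Sketch: iterate the multi-document kubeadm YAML shown above.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // assumed local copy of the config above
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break // end of the YAML stream
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}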
	
	I0621 19:07:53.801620   46765 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 19:07:53.811659   46765 command_runner.go:130] > kubeadm
	I0621 19:07:53.811685   46765 command_runner.go:130] > kubectl
	I0621 19:07:53.811692   46765 command_runner.go:130] > kubelet
	I0621 19:07:53.811732   46765 binaries.go:44] Found k8s binaries, skipping transfer
	I0621 19:07:53.811786   46765 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0621 19:07:53.821183   46765 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0621 19:07:53.837267   46765 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0621 19:07:53.852812   46765 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0621 19:07:53.868545   46765 ssh_runner.go:195] Run: grep 192.168.39.146	control-plane.minikube.internal$ /etc/hosts
	I0621 19:07:53.872025   46765 command_runner.go:130] > 192.168.39.146	control-plane.minikube.internal
	I0621 19:07:53.872171   46765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 19:07:54.017064   46765 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0621 19:07:54.032446   46765 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952 for IP: 192.168.39.146
	I0621 19:07:54.032470   46765 certs.go:194] generating shared ca certs ...
	I0621 19:07:54.032493   46765 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 19:07:54.032680   46765 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 19:07:54.032738   46765 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 19:07:54.032753   46765 certs.go:256] generating profile certs ...
	I0621 19:07:54.032864   46765 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952/client.key
	I0621 19:07:54.032974   46765 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952/apiserver.key.d197130b
	I0621 19:07:54.033031   46765 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952/proxy-client.key
	I0621 19:07:54.033047   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0621 19:07:54.033070   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0621 19:07:54.033092   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0621 19:07:54.033112   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0621 19:07:54.033133   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0621 19:07:54.033152   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0621 19:07:54.033175   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0621 19:07:54.033191   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0621 19:07:54.033259   46765 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 19:07:54.033304   46765 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 19:07:54.033319   46765 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 19:07:54.033357   46765 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 19:07:54.033395   46765 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 19:07:54.033431   46765 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 19:07:54.033489   46765 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 19:07:54.033540   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem -> /usr/share/ca-certificates/15329.pem
	I0621 19:07:54.033563   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> /usr/share/ca-certificates/153292.pem
	I0621 19:07:54.033583   46765 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0621 19:07:54.034406   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 19:07:54.058066   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 19:07:54.080170   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 19:07:54.102474   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 19:07:54.124098   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0621 19:07:54.146047   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0621 19:07:54.168689   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 19:07:54.191456   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/multinode-851952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0621 19:07:54.216292   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 19:07:54.239770   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 19:07:54.261111   46765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 19:07:54.282610   46765 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0621 19:07:54.297659   46765 ssh_runner.go:195] Run: openssl version
	I0621 19:07:54.303098   46765 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0621 19:07:54.303183   46765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 19:07:54.313948   46765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 19:07:54.317922   46765 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 19:07:54.317950   46765 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 19:07:54.317986   46765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 19:07:54.322958   46765 command_runner.go:130] > 51391683
	I0621 19:07:54.323123   46765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 19:07:54.332603   46765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 19:07:54.344115   46765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 19:07:54.348293   46765 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 19:07:54.348314   46765 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 19:07:54.348349   46765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 19:07:54.353448   46765 command_runner.go:130] > 3ec20f2e
	I0621 19:07:54.353573   46765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 19:07:54.362495   46765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 19:07:54.372987   46765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 19:07:54.377014   46765 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 19:07:54.377169   46765 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 19:07:54.377207   46765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 19:07:54.382227   46765 command_runner.go:130] > b5213941
	I0621 19:07:54.382388   46765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
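Each of the three ca-certificates steps above follows the same pattern: compute the OpenSSL subject hash of the PEM (51391683, 3ec20f2e, b5213941) and symlink <hash>.0 in /etc/ssl/certs back at the certificate, which is how OpenSSL finds trusted CAs by hash. A minimal stand-alone sketch of that pattern (not minikube's own helper), assuming openssl is on PATH and the process can write to /etc/ssl/certs:

// Sketch: hash a CA certificate and install the <hash>.0 symlink.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path from the log above

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // behave like "ln -fs": replace any existing link
	if err := os.Symlink(certPath, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", link, "->", certPath)
}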
	I0621 19:07:54.391940   46765 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 19:07:54.396528   46765 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 19:07:54.396552   46765 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0621 19:07:54.396557   46765 command_runner.go:130] > Device: 253,1	Inode: 6292501     Links: 1
	I0621 19:07:54.396563   46765 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0621 19:07:54.396568   46765 command_runner.go:130] > Access: 2024-06-21 19:01:50.569403511 +0000
	I0621 19:07:54.396573   46765 command_runner.go:130] > Modify: 2024-06-21 19:01:50.569403511 +0000
	I0621 19:07:54.396578   46765 command_runner.go:130] > Change: 2024-06-21 19:01:50.569403511 +0000
	I0621 19:07:54.396583   46765 command_runner.go:130] >  Birth: 2024-06-21 19:01:50.569403511 +0000
	I0621 19:07:54.396655   46765 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0621 19:07:54.402030   46765 command_runner.go:130] > Certificate will not expire
	I0621 19:07:54.402236   46765 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0621 19:07:54.407455   46765 command_runner.go:130] > Certificate will not expire
	I0621 19:07:54.407673   46765 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0621 19:07:54.413128   46765 command_runner.go:130] > Certificate will not expire
	I0621 19:07:54.413303   46765 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0621 19:07:54.418723   46765 command_runner.go:130] > Certificate will not expire
	I0621 19:07:54.418786   46765 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0621 19:07:54.423928   46765 command_runner.go:130] > Certificate will not expire
	I0621 19:07:54.424138   46765 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0621 19:07:54.429083   46765 command_runner.go:130] > Certificate will not expire
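The "openssl x509 ... -checkend 86400" probes above simply ask whether each certificate is still valid 24 hours from now. A minimal sketch of the same check using only Go's standard library (the certificate path is illustrative):

// Sketch: report whether a PEM certificate expires within the next 24h.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire within 24h")
	} else {
		fmt.Println("Certificate will not expire")
	}
}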
	I0621 19:07:54.429231   46765 kubeadm.go:391] StartCluster: {Name:multinode-851952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:multinode-851952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.135 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 19:07:54.429380   46765 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0621 19:07:54.429429   46765 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0621 19:07:54.465678   46765 command_runner.go:130] > d4fd10189beef0ec38e8cb9f7a74f819461e323eae4b3b6bbddfef6886151497
	I0621 19:07:54.465708   46765 command_runner.go:130] > 55c61aaf731d1c5b583250944ecfc821dc3d84cda2e4811057a76e46f1f7e359
	I0621 19:07:54.465718   46765 command_runner.go:130] > 36ce441ec2d19ca6ea23c289892f5a6e9e89696807088fc5a1cbb22c4c594f83
	I0621 19:07:54.465729   46765 command_runner.go:130] > 9da10767b93f9fd673b0149bec75fb836426f92a6e05b0dd34b0e7b07b3575b2
	I0621 19:07:54.465737   46765 command_runner.go:130] > 02bcd841d722fb9c576107bda76adbf87c3593aa8234019fbc016f3d25c3e44c
	I0621 19:07:54.465746   46765 command_runner.go:130] > 736b6d52184414f45058085f602c2205184a11654872f5c8b09b8379789a201c
	I0621 19:07:54.465755   46765 command_runner.go:130] > 77ba488fac51d9683c16065c66b9a57f223578131eb37d5b3b8f4ee54ab59fd1
	I0621 19:07:54.465770   46765 command_runner.go:130] > 40087081e25d8085a666328a29561a84b540fe152452e7091cefd1db700e8acd
	I0621 19:07:54.465806   46765 cri.go:89] found id: "d4fd10189beef0ec38e8cb9f7a74f819461e323eae4b3b6bbddfef6886151497"
	I0621 19:07:54.465818   46765 cri.go:89] found id: "55c61aaf731d1c5b583250944ecfc821dc3d84cda2e4811057a76e46f1f7e359"
	I0621 19:07:54.465824   46765 cri.go:89] found id: "36ce441ec2d19ca6ea23c289892f5a6e9e89696807088fc5a1cbb22c4c594f83"
	I0621 19:07:54.465828   46765 cri.go:89] found id: "9da10767b93f9fd673b0149bec75fb836426f92a6e05b0dd34b0e7b07b3575b2"
	I0621 19:07:54.465832   46765 cri.go:89] found id: "02bcd841d722fb9c576107bda76adbf87c3593aa8234019fbc016f3d25c3e44c"
	I0621 19:07:54.465837   46765 cri.go:89] found id: "736b6d52184414f45058085f602c2205184a11654872f5c8b09b8379789a201c"
	I0621 19:07:54.465841   46765 cri.go:89] found id: "77ba488fac51d9683c16065c66b9a57f223578131eb37d5b3b8f4ee54ab59fd1"
	I0621 19:07:54.465845   46765 cri.go:89] found id: "40087081e25d8085a666328a29561a84b540fe152452e7091cefd1db700e8acd"
	I0621 19:07:54.465849   46765 cri.go:89] found id: ""
	I0621 19:07:54.465892   46765 ssh_runner.go:195] Run: sudo runc list -f json
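The "found id" entries above come from crictl: minikube lists all kube-system containers by label and records each container ID before moving on to "runc list". A minimal sketch of that same query via os/exec (assuming crictl and sudo are available on the node), not the cri.go implementation itself:

// Sketch: collect kube-system container IDs the way the log above does.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}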
	
	
	==> CRI-O <==
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.179544284Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718997100179520221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=581b02cc-eb67-45c7-8e8b-7ee4b70cc13b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.180056151Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b215135f-f138-47b8-95f5-350b1dace6f7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.180125232Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b215135f-f138-47b8-95f5-350b1dace6f7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.180500407Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:89439fcc1faf71b39c69b6a49edcbc1b6ef6fea006f079a6e358e1f90c3fecc2,PodSandboxId:11ffe81acbb509d6e0065ceda3e866ebfbe28073ff1690800bacf6fb1bf8fd2b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718996915220929518,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rwq2d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fb5aa3b9-e31c-486b-bc01-8faea6986d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 8bec9b05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c4b975ffa0bef3cdb48bc25f7eeab1294213df3e8d6a05c2e892207c0dc173,PodSandboxId:8de73709691a3b536d412aa59c89afea9748eebfdd169c7ff833decf6dfedd92,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718996881829444853,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mrcqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68820bdc-6391-4f97-ab90-8d100de2f0f1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c4d278e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d83a92ace1cee76af2d2a4d4514bb4b9d0fad8467cf635f92b479fb7e23808a,PodSandboxId:de3f9b7d54bb3b4b481c96e19a9dae56796e2caa60345f32ed8b757156b1c514,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718996881629457186,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hfwfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3abeb3c8-683d-4272-ae28-0193331f528d,},Annotations:map[string]string{io.kubernetes.container.hash: 7d20c077,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c77c4e18ef1f9562589ad7ac29c7ba5f0f96004e260278d0c12d931432215302,PodSandboxId:e1a72fca3965a39e75a5f23dddc1a4baf47d16ef093ca24ff1c7674fb943b0e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718996881547572635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lcgp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9727b60b-2689-4f26-9276-88efe3296374,},Annotations:map[string]
string{io.kubernetes.container.hash: 4f380793,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bedc6977a27755e7fca1a63c1a9d00f1f0a54d82eb2a3187c77142615620d46c,PodSandboxId:57df3e36569309ca23c946b1b7dc2e5d36bd036346295db69c2c58dc58f8dbd8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718996881487065756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e789867a-771b-4879-b010-02d710e5742a,},Annotations:map[string]string{io.ku
bernetes.container.hash: cf259908,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1932382e2a0188fb72b28909ac83ee13bd52cc5f6e016e8ffd77d1e3a08a85a2,PodSandboxId:b0b8ca34537094fd7a9f711b801ac3c6686630582e7ccc9c42d2abbaccd297fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718996876686496232,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbec4e210bed61a23dcce0a53847ec6c,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5542071560d99295e74d925dd1e1d98c6a2b5f390f06a009e0c29e5386fa968e,PodSandboxId:64993081fa8fff04f4f1dbcce496c8024a02ae07915a4d8d8f7d952613b684e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718996876705109488,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9986c671c608acd2d2a568d12af3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 2929e396,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e6c1b76c6742180378fd84a02cf3d13dc8f538fd4759f90984ca1b0cfbda0d,PodSandboxId:0154e8f660b7e7d416a8a8ed92578b387760a78e869e971dc4c01acd5d7797bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718996876697284640,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 031ccabb4efca1565643eb6b5f5e2ec8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48fda169ce7646008e0341848682e03953afca18591e5318433acb9c645b3d49,PodSandboxId:720bfedbf7fc66572645bef4a5387a6ebfd7f71fd44b76a818b5b724fa9ea1f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718996876596211428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1f2f20f6ad7034c0592078e31b5614,},Annotations:map[string]string{io.kubernetes.container.hash: a2b1940a,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e796e77879a17462fdc8d1e3c5bdb29549cdfd9e2f6e289a21a6e43b02a4d331,PodSandboxId:ea35aa521b03b7fec8d5fc6be4a34df88045d1d48b103149e23be06f072d7307,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718996582296495624,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rwq2d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fb5aa3b9-e31c-486b-bc01-8faea6986d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 8bec9b05,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4fd10189beef0ec38e8cb9f7a74f819461e323eae4b3b6bbddfef6886151497,PodSandboxId:4fb33dfce476ed3823e8cca1f72ed14304faa62f3b41d1dfa1ab27273fe35ca0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718996534883309651,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hfwfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3abeb3c8-683d-4272-ae28-0193331f528d,},Annotations:map[string]string{io.kubernetes.container.hash: 7d20c077,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c61aaf731d1c5b583250944ecfc821dc3d84cda2e4811057a76e46f1f7e359,PodSandboxId:85cb6ca1d24caad46359b6da0ba5d7fef334a953ca31936ea5facd138aa034f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718996534811058203,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e789867a-771b-4879-b010-02d710e5742a,},Annotations:map[string]string{io.kubernetes.container.hash: cf259908,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36ce441ec2d19ca6ea23c289892f5a6e9e89696807088fc5a1cbb22c4c594f83,PodSandboxId:37894f95939c33d031fb7adf9cda5a47b3dc82a9dbfd34fe898761937ab04af4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718996533343332351,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mrcqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 68820bdc-6391-4f97-ab90-8d100de2f0f1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c4d278e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9da10767b93f9fd673b0149bec75fb836426f92a6e05b0dd34b0e7b07b3575b2,PodSandboxId:e1ed586db133d068777a4a215969814542284b04d8298438220678fba936ea1e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718996532465193333,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lcgp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9727b60b-2689-4f26-9276-
88efe3296374,},Annotations:map[string]string{io.kubernetes.container.hash: 4f380793,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02bcd841d722fb9c576107bda76adbf87c3593aa8234019fbc016f3d25c3e44c,PodSandboxId:f5f532e3c35f66380c8143c9e540c938aeeba0dd60a49015303fd6952fa2dc57,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718996513757373692,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9986c671c608acd2d2a568d12af3b4,},Annotations:map[string]string{
io.kubernetes.container.hash: 2929e396,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:736b6d52184414f45058085f602c2205184a11654872f5c8b09b8379789a201c,PodSandboxId:8f18790ab0368780fe3ac2954123025233266fb448a70fc0a4179487baaa7a70,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1718996513731665505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 031ccabb4efca1565643eb6b5f5e2ec8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77ba488fac51d9683c16065c66b9a57f223578131eb37d5b3b8f4ee54ab59fd1,PodSandboxId:7c79852f0ef58d2fd5cddb43f247fbd33f807747284ae3b9a450f82832050f49,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718996513719491108,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbec4e210bed61a23dcce0a53847ec6c,},Annotations:map[string]string{io.
kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40087081e25d8085a666328a29561a84b540fe152452e7091cefd1db700e8acd,PodSandboxId:d7d511623babc445d61565e6e4603b379b5ec9e9dae0a1cf899e328e6b73c2ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1718996513652670101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1f2f20f6ad7034c0592078e31b5614,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: a2b1940a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b215135f-f138-47b8-95f5-350b1dace6f7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.225856252Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c33ef447-7f5d-4f31-be0f-559c7f93c3fc name=/runtime.v1.RuntimeService/Version
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.225948588Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c33ef447-7f5d-4f31-be0f-559c7f93c3fc name=/runtime.v1.RuntimeService/Version
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.227069331Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=245f8aae-4ebd-4d85-a588-1cc10ca62f11 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.227594697Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718997100227571124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=245f8aae-4ebd-4d85-a588-1cc10ca62f11 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.228461588Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39267e71-2b44-4438-accd-63b54ae13b01 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.228529153Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39267e71-2b44-4438-accd-63b54ae13b01 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.228871142Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:89439fcc1faf71b39c69b6a49edcbc1b6ef6fea006f079a6e358e1f90c3fecc2,PodSandboxId:11ffe81acbb509d6e0065ceda3e866ebfbe28073ff1690800bacf6fb1bf8fd2b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718996915220929518,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rwq2d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fb5aa3b9-e31c-486b-bc01-8faea6986d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 8bec9b05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c4b975ffa0bef3cdb48bc25f7eeab1294213df3e8d6a05c2e892207c0dc173,PodSandboxId:8de73709691a3b536d412aa59c89afea9748eebfdd169c7ff833decf6dfedd92,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718996881829444853,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mrcqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68820bdc-6391-4f97-ab90-8d100de2f0f1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c4d278e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d83a92ace1cee76af2d2a4d4514bb4b9d0fad8467cf635f92b479fb7e23808a,PodSandboxId:de3f9b7d54bb3b4b481c96e19a9dae56796e2caa60345f32ed8b757156b1c514,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718996881629457186,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hfwfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3abeb3c8-683d-4272-ae28-0193331f528d,},Annotations:map[string]string{io.kubernetes.container.hash: 7d20c077,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c77c4e18ef1f9562589ad7ac29c7ba5f0f96004e260278d0c12d931432215302,PodSandboxId:e1a72fca3965a39e75a5f23dddc1a4baf47d16ef093ca24ff1c7674fb943b0e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718996881547572635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lcgp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9727b60b-2689-4f26-9276-88efe3296374,},Annotations:map[string]
string{io.kubernetes.container.hash: 4f380793,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bedc6977a27755e7fca1a63c1a9d00f1f0a54d82eb2a3187c77142615620d46c,PodSandboxId:57df3e36569309ca23c946b1b7dc2e5d36bd036346295db69c2c58dc58f8dbd8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718996881487065756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e789867a-771b-4879-b010-02d710e5742a,},Annotations:map[string]string{io.ku
bernetes.container.hash: cf259908,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1932382e2a0188fb72b28909ac83ee13bd52cc5f6e016e8ffd77d1e3a08a85a2,PodSandboxId:b0b8ca34537094fd7a9f711b801ac3c6686630582e7ccc9c42d2abbaccd297fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718996876686496232,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbec4e210bed61a23dcce0a53847ec6c,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5542071560d99295e74d925dd1e1d98c6a2b5f390f06a009e0c29e5386fa968e,PodSandboxId:64993081fa8fff04f4f1dbcce496c8024a02ae07915a4d8d8f7d952613b684e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718996876705109488,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9986c671c608acd2d2a568d12af3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 2929e396,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e6c1b76c6742180378fd84a02cf3d13dc8f538fd4759f90984ca1b0cfbda0d,PodSandboxId:0154e8f660b7e7d416a8a8ed92578b387760a78e869e971dc4c01acd5d7797bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718996876697284640,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 031ccabb4efca1565643eb6b5f5e2ec8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48fda169ce7646008e0341848682e03953afca18591e5318433acb9c645b3d49,PodSandboxId:720bfedbf7fc66572645bef4a5387a6ebfd7f71fd44b76a818b5b724fa9ea1f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718996876596211428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1f2f20f6ad7034c0592078e31b5614,},Annotations:map[string]string{io.kubernetes.container.hash: a2b1940a,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e796e77879a17462fdc8d1e3c5bdb29549cdfd9e2f6e289a21a6e43b02a4d331,PodSandboxId:ea35aa521b03b7fec8d5fc6be4a34df88045d1d48b103149e23be06f072d7307,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718996582296495624,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rwq2d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fb5aa3b9-e31c-486b-bc01-8faea6986d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 8bec9b05,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4fd10189beef0ec38e8cb9f7a74f819461e323eae4b3b6bbddfef6886151497,PodSandboxId:4fb33dfce476ed3823e8cca1f72ed14304faa62f3b41d1dfa1ab27273fe35ca0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718996534883309651,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hfwfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3abeb3c8-683d-4272-ae28-0193331f528d,},Annotations:map[string]string{io.kubernetes.container.hash: 7d20c077,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c61aaf731d1c5b583250944ecfc821dc3d84cda2e4811057a76e46f1f7e359,PodSandboxId:85cb6ca1d24caad46359b6da0ba5d7fef334a953ca31936ea5facd138aa034f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718996534811058203,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e789867a-771b-4879-b010-02d710e5742a,},Annotations:map[string]string{io.kubernetes.container.hash: cf259908,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36ce441ec2d19ca6ea23c289892f5a6e9e89696807088fc5a1cbb22c4c594f83,PodSandboxId:37894f95939c33d031fb7adf9cda5a47b3dc82a9dbfd34fe898761937ab04af4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718996533343332351,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mrcqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 68820bdc-6391-4f97-ab90-8d100de2f0f1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c4d278e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9da10767b93f9fd673b0149bec75fb836426f92a6e05b0dd34b0e7b07b3575b2,PodSandboxId:e1ed586db133d068777a4a215969814542284b04d8298438220678fba936ea1e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718996532465193333,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lcgp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9727b60b-2689-4f26-9276-
88efe3296374,},Annotations:map[string]string{io.kubernetes.container.hash: 4f380793,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02bcd841d722fb9c576107bda76adbf87c3593aa8234019fbc016f3d25c3e44c,PodSandboxId:f5f532e3c35f66380c8143c9e540c938aeeba0dd60a49015303fd6952fa2dc57,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718996513757373692,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9986c671c608acd2d2a568d12af3b4,},Annotations:map[string]string{
io.kubernetes.container.hash: 2929e396,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:736b6d52184414f45058085f602c2205184a11654872f5c8b09b8379789a201c,PodSandboxId:8f18790ab0368780fe3ac2954123025233266fb448a70fc0a4179487baaa7a70,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1718996513731665505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 031ccabb4efca1565643eb6b5f5e2ec8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77ba488fac51d9683c16065c66b9a57f223578131eb37d5b3b8f4ee54ab59fd1,PodSandboxId:7c79852f0ef58d2fd5cddb43f247fbd33f807747284ae3b9a450f82832050f49,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718996513719491108,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbec4e210bed61a23dcce0a53847ec6c,},Annotations:map[string]string{io.
kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40087081e25d8085a666328a29561a84b540fe152452e7091cefd1db700e8acd,PodSandboxId:d7d511623babc445d61565e6e4603b379b5ec9e9dae0a1cf899e328e6b73c2ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1718996513652670101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1f2f20f6ad7034c0592078e31b5614,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: a2b1940a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39267e71-2b44-4438-accd-63b54ae13b01 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.268459477Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=32be2957-99ee-400d-945e-7c06864a1db5 name=/runtime.v1.RuntimeService/Version
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.268528156Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=32be2957-99ee-400d-945e-7c06864a1db5 name=/runtime.v1.RuntimeService/Version
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.269600432Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b13cacf5-5c22-4eee-b2f2-2820d9822b96 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.270034778Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718997100270012160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b13cacf5-5c22-4eee-b2f2-2820d9822b96 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.270620492Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=003f3a81-82b9-4421-8c8b-606cc6f5e34c name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.270699213Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=003f3a81-82b9-4421-8c8b-606cc6f5e34c name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.271066206Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:89439fcc1faf71b39c69b6a49edcbc1b6ef6fea006f079a6e358e1f90c3fecc2,PodSandboxId:11ffe81acbb509d6e0065ceda3e866ebfbe28073ff1690800bacf6fb1bf8fd2b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718996915220929518,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rwq2d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fb5aa3b9-e31c-486b-bc01-8faea6986d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 8bec9b05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c4b975ffa0bef3cdb48bc25f7eeab1294213df3e8d6a05c2e892207c0dc173,PodSandboxId:8de73709691a3b536d412aa59c89afea9748eebfdd169c7ff833decf6dfedd92,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718996881829444853,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mrcqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68820bdc-6391-4f97-ab90-8d100de2f0f1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c4d278e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d83a92ace1cee76af2d2a4d4514bb4b9d0fad8467cf635f92b479fb7e23808a,PodSandboxId:de3f9b7d54bb3b4b481c96e19a9dae56796e2caa60345f32ed8b757156b1c514,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718996881629457186,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hfwfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3abeb3c8-683d-4272-ae28-0193331f528d,},Annotations:map[string]string{io.kubernetes.container.hash: 7d20c077,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c77c4e18ef1f9562589ad7ac29c7ba5f0f96004e260278d0c12d931432215302,PodSandboxId:e1a72fca3965a39e75a5f23dddc1a4baf47d16ef093ca24ff1c7674fb943b0e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718996881547572635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lcgp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9727b60b-2689-4f26-9276-88efe3296374,},Annotations:map[string]
string{io.kubernetes.container.hash: 4f380793,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bedc6977a27755e7fca1a63c1a9d00f1f0a54d82eb2a3187c77142615620d46c,PodSandboxId:57df3e36569309ca23c946b1b7dc2e5d36bd036346295db69c2c58dc58f8dbd8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718996881487065756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e789867a-771b-4879-b010-02d710e5742a,},Annotations:map[string]string{io.ku
bernetes.container.hash: cf259908,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1932382e2a0188fb72b28909ac83ee13bd52cc5f6e016e8ffd77d1e3a08a85a2,PodSandboxId:b0b8ca34537094fd7a9f711b801ac3c6686630582e7ccc9c42d2abbaccd297fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718996876686496232,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbec4e210bed61a23dcce0a53847ec6c,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5542071560d99295e74d925dd1e1d98c6a2b5f390f06a009e0c29e5386fa968e,PodSandboxId:64993081fa8fff04f4f1dbcce496c8024a02ae07915a4d8d8f7d952613b684e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718996876705109488,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9986c671c608acd2d2a568d12af3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 2929e396,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e6c1b76c6742180378fd84a02cf3d13dc8f538fd4759f90984ca1b0cfbda0d,PodSandboxId:0154e8f660b7e7d416a8a8ed92578b387760a78e869e971dc4c01acd5d7797bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718996876697284640,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 031ccabb4efca1565643eb6b5f5e2ec8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48fda169ce7646008e0341848682e03953afca18591e5318433acb9c645b3d49,PodSandboxId:720bfedbf7fc66572645bef4a5387a6ebfd7f71fd44b76a818b5b724fa9ea1f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718996876596211428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1f2f20f6ad7034c0592078e31b5614,},Annotations:map[string]string{io.kubernetes.container.hash: a2b1940a,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e796e77879a17462fdc8d1e3c5bdb29549cdfd9e2f6e289a21a6e43b02a4d331,PodSandboxId:ea35aa521b03b7fec8d5fc6be4a34df88045d1d48b103149e23be06f072d7307,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718996582296495624,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rwq2d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fb5aa3b9-e31c-486b-bc01-8faea6986d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 8bec9b05,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4fd10189beef0ec38e8cb9f7a74f819461e323eae4b3b6bbddfef6886151497,PodSandboxId:4fb33dfce476ed3823e8cca1f72ed14304faa62f3b41d1dfa1ab27273fe35ca0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718996534883309651,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hfwfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3abeb3c8-683d-4272-ae28-0193331f528d,},Annotations:map[string]string{io.kubernetes.container.hash: 7d20c077,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c61aaf731d1c5b583250944ecfc821dc3d84cda2e4811057a76e46f1f7e359,PodSandboxId:85cb6ca1d24caad46359b6da0ba5d7fef334a953ca31936ea5facd138aa034f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718996534811058203,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e789867a-771b-4879-b010-02d710e5742a,},Annotations:map[string]string{io.kubernetes.container.hash: cf259908,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36ce441ec2d19ca6ea23c289892f5a6e9e89696807088fc5a1cbb22c4c594f83,PodSandboxId:37894f95939c33d031fb7adf9cda5a47b3dc82a9dbfd34fe898761937ab04af4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718996533343332351,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mrcqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 68820bdc-6391-4f97-ab90-8d100de2f0f1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c4d278e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9da10767b93f9fd673b0149bec75fb836426f92a6e05b0dd34b0e7b07b3575b2,PodSandboxId:e1ed586db133d068777a4a215969814542284b04d8298438220678fba936ea1e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718996532465193333,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lcgp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9727b60b-2689-4f26-9276-
88efe3296374,},Annotations:map[string]string{io.kubernetes.container.hash: 4f380793,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02bcd841d722fb9c576107bda76adbf87c3593aa8234019fbc016f3d25c3e44c,PodSandboxId:f5f532e3c35f66380c8143c9e540c938aeeba0dd60a49015303fd6952fa2dc57,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718996513757373692,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9986c671c608acd2d2a568d12af3b4,},Annotations:map[string]string{
io.kubernetes.container.hash: 2929e396,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:736b6d52184414f45058085f602c2205184a11654872f5c8b09b8379789a201c,PodSandboxId:8f18790ab0368780fe3ac2954123025233266fb448a70fc0a4179487baaa7a70,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1718996513731665505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 031ccabb4efca1565643eb6b5f5e2ec8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77ba488fac51d9683c16065c66b9a57f223578131eb37d5b3b8f4ee54ab59fd1,PodSandboxId:7c79852f0ef58d2fd5cddb43f247fbd33f807747284ae3b9a450f82832050f49,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718996513719491108,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbec4e210bed61a23dcce0a53847ec6c,},Annotations:map[string]string{io.
kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40087081e25d8085a666328a29561a84b540fe152452e7091cefd1db700e8acd,PodSandboxId:d7d511623babc445d61565e6e4603b379b5ec9e9dae0a1cf899e328e6b73c2ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1718996513652670101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1f2f20f6ad7034c0592078e31b5614,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: a2b1940a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=003f3a81-82b9-4421-8c8b-606cc6f5e34c name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.311221645Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=866ec87a-065d-4bd0-9957-575b28fab3ec name=/runtime.v1.RuntimeService/Version
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.311313188Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=866ec87a-065d-4bd0-9957-575b28fab3ec name=/runtime.v1.RuntimeService/Version
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.312563822Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1c00738e-3a41-464b-bdb0-5d103bc38fbb name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.313126569Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718997100313103632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133264,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c00738e-3a41-464b-bdb0-5d103bc38fbb name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.313675491Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d38c83df-a079-4256-b774-5aad0c522349 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.313741962Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d38c83df-a079-4256-b774-5aad0c522349 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:11:40 multinode-851952 crio[2806]: time="2024-06-21 19:11:40.314198136Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:89439fcc1faf71b39c69b6a49edcbc1b6ef6fea006f079a6e358e1f90c3fecc2,PodSandboxId:11ffe81acbb509d6e0065ceda3e866ebfbe28073ff1690800bacf6fb1bf8fd2b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718996915220929518,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rwq2d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fb5aa3b9-e31c-486b-bc01-8faea6986d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 8bec9b05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c4b975ffa0bef3cdb48bc25f7eeab1294213df3e8d6a05c2e892207c0dc173,PodSandboxId:8de73709691a3b536d412aa59c89afea9748eebfdd169c7ff833decf6dfedd92,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718996881829444853,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mrcqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68820bdc-6391-4f97-ab90-8d100de2f0f1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c4d278e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d83a92ace1cee76af2d2a4d4514bb4b9d0fad8467cf635f92b479fb7e23808a,PodSandboxId:de3f9b7d54bb3b4b481c96e19a9dae56796e2caa60345f32ed8b757156b1c514,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718996881629457186,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hfwfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3abeb3c8-683d-4272-ae28-0193331f528d,},Annotations:map[string]string{io.kubernetes.container.hash: 7d20c077,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c77c4e18ef1f9562589ad7ac29c7ba5f0f96004e260278d0c12d931432215302,PodSandboxId:e1a72fca3965a39e75a5f23dddc1a4baf47d16ef093ca24ff1c7674fb943b0e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718996881547572635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lcgp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9727b60b-2689-4f26-9276-88efe3296374,},Annotations:map[string]
string{io.kubernetes.container.hash: 4f380793,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bedc6977a27755e7fca1a63c1a9d00f1f0a54d82eb2a3187c77142615620d46c,PodSandboxId:57df3e36569309ca23c946b1b7dc2e5d36bd036346295db69c2c58dc58f8dbd8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718996881487065756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e789867a-771b-4879-b010-02d710e5742a,},Annotations:map[string]string{io.ku
bernetes.container.hash: cf259908,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1932382e2a0188fb72b28909ac83ee13bd52cc5f6e016e8ffd77d1e3a08a85a2,PodSandboxId:b0b8ca34537094fd7a9f711b801ac3c6686630582e7ccc9c42d2abbaccd297fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718996876686496232,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbec4e210bed61a23dcce0a53847ec6c,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5542071560d99295e74d925dd1e1d98c6a2b5f390f06a009e0c29e5386fa968e,PodSandboxId:64993081fa8fff04f4f1dbcce496c8024a02ae07915a4d8d8f7d952613b684e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718996876705109488,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9986c671c608acd2d2a568d12af3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 2929e396,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e6c1b76c6742180378fd84a02cf3d13dc8f538fd4759f90984ca1b0cfbda0d,PodSandboxId:0154e8f660b7e7d416a8a8ed92578b387760a78e869e971dc4c01acd5d7797bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718996876697284640,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 031ccabb4efca1565643eb6b5f5e2ec8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48fda169ce7646008e0341848682e03953afca18591e5318433acb9c645b3d49,PodSandboxId:720bfedbf7fc66572645bef4a5387a6ebfd7f71fd44b76a818b5b724fa9ea1f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718996876596211428,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1f2f20f6ad7034c0592078e31b5614,},Annotations:map[string]string{io.kubernetes.container.hash: a2b1940a,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e796e77879a17462fdc8d1e3c5bdb29549cdfd9e2f6e289a21a6e43b02a4d331,PodSandboxId:ea35aa521b03b7fec8d5fc6be4a34df88045d1d48b103149e23be06f072d7307,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718996582296495624,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-rwq2d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fb5aa3b9-e31c-486b-bc01-8faea6986d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 8bec9b05,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4fd10189beef0ec38e8cb9f7a74f819461e323eae4b3b6bbddfef6886151497,PodSandboxId:4fb33dfce476ed3823e8cca1f72ed14304faa62f3b41d1dfa1ab27273fe35ca0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718996534883309651,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hfwfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3abeb3c8-683d-4272-ae28-0193331f528d,},Annotations:map[string]string{io.kubernetes.container.hash: 7d20c077,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c61aaf731d1c5b583250944ecfc821dc3d84cda2e4811057a76e46f1f7e359,PodSandboxId:85cb6ca1d24caad46359b6da0ba5d7fef334a953ca31936ea5facd138aa034f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718996534811058203,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e789867a-771b-4879-b010-02d710e5742a,},Annotations:map[string]string{io.kubernetes.container.hash: cf259908,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36ce441ec2d19ca6ea23c289892f5a6e9e89696807088fc5a1cbb22c4c594f83,PodSandboxId:37894f95939c33d031fb7adf9cda5a47b3dc82a9dbfd34fe898761937ab04af4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718996533343332351,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mrcqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 68820bdc-6391-4f97-ab90-8d100de2f0f1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c4d278e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9da10767b93f9fd673b0149bec75fb836426f92a6e05b0dd34b0e7b07b3575b2,PodSandboxId:e1ed586db133d068777a4a215969814542284b04d8298438220678fba936ea1e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718996532465193333,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lcgp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9727b60b-2689-4f26-9276-
88efe3296374,},Annotations:map[string]string{io.kubernetes.container.hash: 4f380793,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02bcd841d722fb9c576107bda76adbf87c3593aa8234019fbc016f3d25c3e44c,PodSandboxId:f5f532e3c35f66380c8143c9e540c938aeeba0dd60a49015303fd6952fa2dc57,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718996513757373692,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9986c671c608acd2d2a568d12af3b4,},Annotations:map[string]string{
io.kubernetes.container.hash: 2929e396,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:736b6d52184414f45058085f602c2205184a11654872f5c8b09b8379789a201c,PodSandboxId:8f18790ab0368780fe3ac2954123025233266fb448a70fc0a4179487baaa7a70,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1718996513731665505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 031ccabb4efca1565643eb6b5f5e2ec8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77ba488fac51d9683c16065c66b9a57f223578131eb37d5b3b8f4ee54ab59fd1,PodSandboxId:7c79852f0ef58d2fd5cddb43f247fbd33f807747284ae3b9a450f82832050f49,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718996513719491108,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbec4e210bed61a23dcce0a53847ec6c,},Annotations:map[string]string{io.
kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40087081e25d8085a666328a29561a84b540fe152452e7091cefd1db700e8acd,PodSandboxId:d7d511623babc445d61565e6e4603b379b5ec9e9dae0a1cf899e328e6b73c2ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1718996513652670101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-851952,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1f2f20f6ad7034c0592078e31b5614,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: a2b1940a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d38c83df-a079-4256-b774-5aad0c522349 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	89439fcc1faf7       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   11ffe81acbb50       busybox-fc5497c4f-rwq2d
	e6c4b975ffa0b       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      3 minutes ago       Running             kindnet-cni               1                   8de73709691a3       kindnet-mrcqf
	0d83a92ace1ce       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   de3f9b7d54bb3       coredns-7db6d8ff4d-hfwfj
	c77c4e18ef1f9       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      3 minutes ago       Running             kube-proxy                1                   e1a72fca3965a       kube-proxy-lcgp6
	bedc6977a2775       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   57df3e3656930       storage-provisioner
	5542071560d99       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   64993081fa8ff       etcd-multinode-851952
	19e6c1b76c674       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      3 minutes ago       Running             kube-controller-manager   1                   0154e8f660b7e       kube-controller-manager-multinode-851952
	1932382e2a018       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      3 minutes ago       Running             kube-scheduler            1                   b0b8ca3453709       kube-scheduler-multinode-851952
	48fda169ce764       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      3 minutes ago       Running             kube-apiserver            1                   720bfedbf7fc6       kube-apiserver-multinode-851952
	e796e77879a17       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   ea35aa521b03b       busybox-fc5497c4f-rwq2d
	d4fd10189beef       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   4fb33dfce476e       coredns-7db6d8ff4d-hfwfj
	55c61aaf731d1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   85cb6ca1d24ca       storage-provisioner
	36ce441ec2d19       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      9 minutes ago       Exited              kindnet-cni               0                   37894f95939c3       kindnet-mrcqf
	9da10767b93f9       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      9 minutes ago       Exited              kube-proxy                0                   e1ed586db133d       kube-proxy-lcgp6
	02bcd841d722f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      9 minutes ago       Exited              etcd                      0                   f5f532e3c35f6       etcd-multinode-851952
	736b6d5218441       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      9 minutes ago       Exited              kube-controller-manager   0                   8f18790ab0368       kube-controller-manager-multinode-851952
	77ba488fac51d       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      9 minutes ago       Exited              kube-scheduler            0                   7c79852f0ef58       kube-scheduler-multinode-851952
	40087081e25d8       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      9 minutes ago       Exited              kube-apiserver            0                   d7d511623babc       kube-apiserver-multinode-851952
	
	
	==> coredns [0d83a92ace1cee76af2d2a4d4514bb4b9d0fad8467cf635f92b479fb7e23808a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57474 - 33214 "HINFO IN 3843566766519598785.8947686938218715761. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020281459s
	
	
	==> coredns [d4fd10189beef0ec38e8cb9f7a74f819461e323eae4b3b6bbddfef6886151497] <==
	[INFO] 10.244.0.3:38027 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001894799s
	[INFO] 10.244.0.3:38366 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093222s
	[INFO] 10.244.0.3:35759 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086773s
	[INFO] 10.244.0.3:58948 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001232911s
	[INFO] 10.244.0.3:55070 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081996s
	[INFO] 10.244.0.3:48492 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060519s
	[INFO] 10.244.0.3:40108 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064581s
	[INFO] 10.244.1.2:51451 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108775s
	[INFO] 10.244.1.2:43578 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102936s
	[INFO] 10.244.1.2:37621 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076841s
	[INFO] 10.244.1.2:33016 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067514s
	[INFO] 10.244.0.3:38865 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142526s
	[INFO] 10.244.0.3:37222 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084343s
	[INFO] 10.244.0.3:36593 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000042012s
	[INFO] 10.244.0.3:46334 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065568s
	[INFO] 10.244.1.2:35833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134489s
	[INFO] 10.244.1.2:43015 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215657s
	[INFO] 10.244.1.2:51209 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116353s
	[INFO] 10.244.1.2:43487 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011957s
	[INFO] 10.244.0.3:60990 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090172s
	[INFO] 10.244.0.3:47397 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000080504s
	[INFO] 10.244.0.3:57033 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000105755s
	[INFO] 10.244.0.3:39863 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000070608s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-851952
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-851952
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=multinode-851952
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_21T19_01_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 19:01:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-851952
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 19:11:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 19:08:00 +0000   Fri, 21 Jun 2024 19:01:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 19:08:00 +0000   Fri, 21 Jun 2024 19:01:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 19:08:00 +0000   Fri, 21 Jun 2024 19:01:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 19:08:00 +0000   Fri, 21 Jun 2024 19:02:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.146
	  Hostname:    multinode-851952
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f78df2d7ac4a44e2bd7b850a69238045
	  System UUID:                f78df2d7-ac4a-44e2-bd7b-850a69238045
	  Boot ID:                    03a98d64-ee80-454b-bc41-587e302c9c98
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rwq2d                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m41s
	  kube-system                 coredns-7db6d8ff4d-hfwfj                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m28s
	  kube-system                 etcd-multinode-851952                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m42s
	  kube-system                 kindnet-mrcqf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m29s
	  kube-system                 kube-apiserver-multinode-851952             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m42s
	  kube-system                 kube-controller-manager-multinode-851952    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m42s
	  kube-system                 kube-proxy-lcgp6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m29s
	  kube-system                 kube-scheduler-multinode-851952             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m42s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m27s                  kube-proxy       
	  Normal  Starting                 3m38s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  9m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m42s (x2 over 9m42s)  kubelet          Node multinode-851952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m42s (x2 over 9m42s)  kubelet          Node multinode-851952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m42s (x2 over 9m42s)  kubelet          Node multinode-851952 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m42s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m29s                  node-controller  Node multinode-851952 event: Registered Node multinode-851952 in Controller
	  Normal  NodeReady                9m26s                  kubelet          Node multinode-851952 status is now: NodeReady
	  Normal  Starting                 3m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m44s (x8 over 3m44s)  kubelet          Node multinode-851952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m44s (x8 over 3m44s)  kubelet          Node multinode-851952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m44s (x7 over 3m44s)  kubelet          Node multinode-851952 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m28s                  node-controller  Node multinode-851952 event: Registered Node multinode-851952 in Controller
	
	
	Name:               multinode-851952-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-851952-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=multinode-851952
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_21T19_08_39_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 19:08:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-851952-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 19:09:19 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 21 Jun 2024 19:09:09 +0000   Fri, 21 Jun 2024 19:10:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 21 Jun 2024 19:09:09 +0000   Fri, 21 Jun 2024 19:10:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 21 Jun 2024 19:09:09 +0000   Fri, 21 Jun 2024 19:10:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 21 Jun 2024 19:09:09 +0000   Fri, 21 Jun 2024 19:10:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    multinode-851952-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8271f20fbcd24ec09b78fd28c81fb7db
	  System UUID:                8271f20f-bcd2-4ec0-9b78-fd28c81fb7db
	  Boot ID:                    4c4aeb8f-7a73-4eee-bbc5-551a745965dc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6s5z7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  kube-system                 kindnet-s78xt              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m53s
	  kube-system                 kube-proxy-lsb9b           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m48s                  kube-proxy       
	  Normal  Starting                 2m58s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    8m53s (x3 over 8m53s)  kubelet          Node multinode-851952-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m53s (x3 over 8m53s)  kubelet          Node multinode-851952-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m53s (x3 over 8m53s)  kubelet          Node multinode-851952-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                8m44s                  kubelet          Node multinode-851952-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m2s)    kubelet          Node multinode-851952-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m2s)    kubelet          Node multinode-851952-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m2s)    kubelet          Node multinode-851952-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m2s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m58s                  node-controller  Node multinode-851952-m02 event: Registered Node multinode-851952-m02 in Controller
	  Normal  NodeReady                2m54s                  kubelet          Node multinode-851952-m02 status is now: NodeReady
	  Normal  NodeNotReady             98s                    node-controller  Node multinode-851952-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +7.067123] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.058349] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054929] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.163287] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.130066] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.258501] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +3.890307] systemd-fstab-generator[758]: Ignoring "noauto" option for root device
	[  +3.444602] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.060031] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.989200] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	[  +0.096631] kauditd_printk_skb: 69 callbacks suppressed
	[Jun21 19:02] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.779460] systemd-fstab-generator[1452]: Ignoring "noauto" option for root device
	[ +47.264708] kauditd_printk_skb: 84 callbacks suppressed
	[Jun21 19:07] systemd-fstab-generator[2724]: Ignoring "noauto" option for root device
	[  +0.159150] systemd-fstab-generator[2736]: Ignoring "noauto" option for root device
	[  +0.172053] systemd-fstab-generator[2750]: Ignoring "noauto" option for root device
	[  +0.149964] systemd-fstab-generator[2762]: Ignoring "noauto" option for root device
	[  +0.275310] systemd-fstab-generator[2791]: Ignoring "noauto" option for root device
	[  +0.721263] systemd-fstab-generator[2891]: Ignoring "noauto" option for root device
	[  +1.830056] systemd-fstab-generator[3013]: Ignoring "noauto" option for root device
	[Jun21 19:08] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.357287] kauditd_printk_skb: 32 callbacks suppressed
	[  +1.362773] systemd-fstab-generator[3837]: Ignoring "noauto" option for root device
	[ +21.028873] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [02bcd841d722fb9c576107bda76adbf87c3593aa8234019fbc016f3d25c3e44c] <==
	{"level":"info","ts":"2024-06-21T19:01:56.659981Z","caller":"traceutil/trace.go:171","msg":"trace[1007002446] range","detail":"{range_begin:/registry/minions/multinode-851952; range_end:; response_count:1; response_revision:17; }","duration":"381.864173ms","start":"2024-06-21T19:01:56.278111Z","end":"2024-06-21T19:01:56.659975Z","steps":["trace[1007002446] 'agreement among raft nodes before linearized reading'  (duration: 381.822894ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-21T19:01:56.659995Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-21T19:01:56.278104Z","time spent":"381.887598ms","remote":"127.0.0.1:51826","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":1,"response size":4153,"request content":"key:\"/registry/minions/multinode-851952\" "}
	{"level":"warn","ts":"2024-06-21T19:02:47.581962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.142106ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8751778564018602588 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-2rff2\" mod_revision:446 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-2rff2\" value_size:2301 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-2rff2\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-21T19:02:47.582066Z","caller":"traceutil/trace.go:171","msg":"trace[1161547719] transaction","detail":"{read_only:false; response_revision:447; number_of_response:1; }","duration":"256.169508ms","start":"2024-06-21T19:02:47.325878Z","end":"2024-06-21T19:02:47.582047Z","steps":["trace[1161547719] 'process raft request'  (duration: 103.558178ms)","trace[1161547719] 'compare'  (duration: 151.893494ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T19:02:53.34323Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.669743ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8751778564018602677 > lease_revoke:<id:7974903c2d35022d>","response":"size:28"}
	{"level":"info","ts":"2024-06-21T19:02:53.34331Z","caller":"traceutil/trace.go:171","msg":"trace[223530444] linearizableReadLoop","detail":"{readStateIndex:507; appliedIndex:506; }","duration":"176.948375ms","start":"2024-06-21T19:02:53.166351Z","end":"2024-06-21T19:02:53.3433Z","steps":["trace[223530444] 'read index received'  (duration: 48.136682ms)","trace[223530444] 'applied index is now lower than readState.Index'  (duration: 128.811057ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T19:02:53.34338Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.011996ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-06-21T19:02:53.343398Z","caller":"traceutil/trace.go:171","msg":"trace[443458560] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:485; }","duration":"177.063288ms","start":"2024-06-21T19:02:53.166326Z","end":"2024-06-21T19:02:53.34339Z","steps":["trace[443458560] 'agreement among raft nodes before linearized reading'  (duration: 176.999966ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-21T19:02:53.752355Z","caller":"traceutil/trace.go:171","msg":"trace[109371543] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"143.337216ms","start":"2024-06-21T19:02:53.609003Z","end":"2024-06-21T19:02:53.75234Z","steps":["trace[109371543] 'process raft request'  (duration: 143.18782ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-21T19:03:29.608496Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.044752ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8751778564018602979 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-851952-m03.17db1a4efc4c2e32\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-851952-m03.17db1a4efc4c2e32\" value_size:642 lease:8751778564018602744 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-06-21T19:03:29.608815Z","caller":"traceutil/trace.go:171","msg":"trace[1541725632] linearizableReadLoop","detail":"{readStateIndex:601; appliedIndex:599; }","duration":"145.735042ms","start":"2024-06-21T19:03:29.463061Z","end":"2024-06-21T19:03:29.608796Z","steps":["trace[1541725632] 'read index received'  (duration: 145.139281ms)","trace[1541725632] 'applied index is now lower than readState.Index'  (duration: 595.064µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-21T19:03:29.608894Z","caller":"traceutil/trace.go:171","msg":"trace[153832864] transaction","detail":"{read_only:false; response_revision:570; number_of_response:1; }","duration":"254.112781ms","start":"2024-06-21T19:03:29.354775Z","end":"2024-06-21T19:03:29.608888Z","steps":["trace[153832864] 'process raft request'  (duration: 102.621438ms)","trace[153832864] 'compare'  (duration: 150.87439ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T19:03:29.60907Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.983143ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-851952-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-06-21T19:03:29.609199Z","caller":"traceutil/trace.go:171","msg":"trace[1470733736] range","detail":"{range_begin:/registry/minions/multinode-851952-m03; range_end:; response_count:1; response_revision:571; }","duration":"146.089683ms","start":"2024-06-21T19:03:29.463037Z","end":"2024-06-21T19:03:29.609127Z","steps":["trace[1470733736] 'agreement among raft nodes before linearized reading'  (duration: 145.926816ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-21T19:03:29.609595Z","caller":"traceutil/trace.go:171","msg":"trace[1010777253] transaction","detail":"{read_only:false; response_revision:571; number_of_response:1; }","duration":"205.617943ms","start":"2024-06-21T19:03:29.403963Z","end":"2024-06-21T19:03:29.609581Z","steps":["trace[1010777253] 'process raft request'  (duration: 204.792384ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-21T19:06:21.090015Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-06-21T19:06:21.09017Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-851952","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.146:2380"],"advertise-client-urls":["https://192.168.39.146:2379"]}
	{"level":"warn","ts":"2024-06-21T19:06:21.090298Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-21T19:06:21.090413Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-21T19:06:21.176089Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.146:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-21T19:06:21.17621Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.146:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-21T19:06:21.177814Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"fc85001aa37e7974","current-leader-member-id":"fc85001aa37e7974"}
	{"level":"info","ts":"2024-06-21T19:06:21.180369Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.146:2380"}
	{"level":"info","ts":"2024-06-21T19:06:21.180494Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.146:2380"}
	{"level":"info","ts":"2024-06-21T19:06:21.180505Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-851952","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.146:2380"],"advertise-client-urls":["https://192.168.39.146:2379"]}
	
	
	==> etcd [5542071560d99295e74d925dd1e1d98c6a2b5f390f06a009e0c29e5386fa968e] <==
	{"level":"info","ts":"2024-06-21T19:07:57.180625Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"25c4f0770a3181de","local-member-id":"fc85001aa37e7974","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T19:07:57.180706Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T19:07:57.182655Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-21T19:07:57.193439Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fc85001aa37e7974","initial-advertise-peer-urls":["https://192.168.39.146:2380"],"listen-peer-urls":["https://192.168.39.146:2380"],"advertise-client-urls":["https://192.168.39.146:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.146:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-21T19:07:57.193699Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-21T19:07:57.185487Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.146:2380"}
	{"level":"info","ts":"2024-06-21T19:07:57.203208Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.146:2380"}
	{"level":"info","ts":"2024-06-21T19:07:58.811857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-21T19:07:58.811915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-21T19:07:58.811957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 received MsgPreVoteResp from fc85001aa37e7974 at term 2"}
	{"level":"info","ts":"2024-06-21T19:07:58.811971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 became candidate at term 3"}
	{"level":"info","ts":"2024-06-21T19:07:58.811976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 received MsgVoteResp from fc85001aa37e7974 at term 3"}
	{"level":"info","ts":"2024-06-21T19:07:58.811984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 became leader at term 3"}
	{"level":"info","ts":"2024-06-21T19:07:58.811995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fc85001aa37e7974 elected leader fc85001aa37e7974 at term 3"}
	{"level":"info","ts":"2024-06-21T19:07:58.817432Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T19:07:58.819352Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.146:2379"}
	{"level":"info","ts":"2024-06-21T19:07:58.817389Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fc85001aa37e7974","local-member-attributes":"{Name:multinode-851952 ClientURLs:[https://192.168.39.146:2379]}","request-path":"/0/members/fc85001aa37e7974/attributes","cluster-id":"25c4f0770a3181de","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-21T19:07:58.820064Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T19:07:58.821576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-21T19:07:58.823227Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-21T19:07:58.823256Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-21T19:09:09.819463Z","caller":"traceutil/trace.go:171","msg":"trace[1701788756] linearizableReadLoop","detail":"{readStateIndex:1188; appliedIndex:1187; }","duration":"117.310687ms","start":"2024-06-21T19:09:09.702116Z","end":"2024-06-21T19:09:09.819427Z","steps":["trace[1701788756] 'read index received'  (duration: 117.195271ms)","trace[1701788756] 'applied index is now lower than readState.Index'  (duration: 112.298µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-21T19:09:09.819589Z","caller":"traceutil/trace.go:171","msg":"trace[1805582098] transaction","detail":"{read_only:false; response_revision:1083; number_of_response:1; }","duration":"208.970711ms","start":"2024-06-21T19:09:09.61061Z","end":"2024-06-21T19:09:09.819581Z","steps":["trace[1805582098] 'process raft request'  (duration: 208.699307ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-21T19:09:09.82001Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.795254ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:8"}
	{"level":"info","ts":"2024-06-21T19:09:09.820093Z","caller":"traceutil/trace.go:171","msg":"trace[912816120] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:1083; }","duration":"117.988767ms","start":"2024-06-21T19:09:09.702092Z","end":"2024-06-21T19:09:09.820081Z","steps":["trace[912816120] 'agreement among raft nodes before linearized reading'  (duration: 117.700337ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:11:40 up 10 min,  0 users,  load average: 0.08, 0.22, 0.12
	Linux multinode-851952 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [36ce441ec2d19ca6ea23c289892f5a6e9e89696807088fc5a1cbb22c4c594f83] <==
	I0621 19:05:34.289027       1 main.go:250] Node multinode-851952-m03 has CIDR [10.244.3.0/24] 
	I0621 19:05:44.301932       1 main.go:223] Handling node with IPs: map[192.168.39.146:{}]
	I0621 19:05:44.302032       1 main.go:227] handling current node
	I0621 19:05:44.302065       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0621 19:05:44.302089       1 main.go:250] Node multinode-851952-m02 has CIDR [10.244.1.0/24] 
	I0621 19:05:44.302274       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0621 19:05:44.302305       1 main.go:250] Node multinode-851952-m03 has CIDR [10.244.3.0/24] 
	I0621 19:05:54.309768       1 main.go:223] Handling node with IPs: map[192.168.39.146:{}]
	I0621 19:05:54.309950       1 main.go:227] handling current node
	I0621 19:05:54.309980       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0621 19:05:54.309989       1 main.go:250] Node multinode-851952-m02 has CIDR [10.244.1.0/24] 
	I0621 19:05:54.310226       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0621 19:05:54.310244       1 main.go:250] Node multinode-851952-m03 has CIDR [10.244.3.0/24] 
	I0621 19:06:04.316444       1 main.go:223] Handling node with IPs: map[192.168.39.146:{}]
	I0621 19:06:04.316498       1 main.go:227] handling current node
	I0621 19:06:04.316517       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0621 19:06:04.316522       1 main.go:250] Node multinode-851952-m02 has CIDR [10.244.1.0/24] 
	I0621 19:06:04.316663       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0621 19:06:04.316681       1 main.go:250] Node multinode-851952-m03 has CIDR [10.244.3.0/24] 
	I0621 19:06:14.329700       1 main.go:223] Handling node with IPs: map[192.168.39.146:{}]
	I0621 19:06:14.329818       1 main.go:227] handling current node
	I0621 19:06:14.329843       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0621 19:06:14.329865       1 main.go:250] Node multinode-851952-m02 has CIDR [10.244.1.0/24] 
	I0621 19:06:14.330002       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0621 19:06:14.330022       1 main.go:250] Node multinode-851952-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [e6c4b975ffa0bef3cdb48bc25f7eeab1294213df3e8d6a05c2e892207c0dc173] <==
	I0621 19:10:32.741703       1 main.go:250] Node multinode-851952-m02 has CIDR [10.244.1.0/24] 
	I0621 19:10:42.754863       1 main.go:223] Handling node with IPs: map[192.168.39.146:{}]
	I0621 19:10:42.754900       1 main.go:227] handling current node
	I0621 19:10:42.754910       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0621 19:10:42.754916       1 main.go:250] Node multinode-851952-m02 has CIDR [10.244.1.0/24] 
	I0621 19:10:52.762195       1 main.go:223] Handling node with IPs: map[192.168.39.146:{}]
	I0621 19:10:52.762225       1 main.go:227] handling current node
	I0621 19:10:52.762236       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0621 19:10:52.762241       1 main.go:250] Node multinode-851952-m02 has CIDR [10.244.1.0/24] 
	I0621 19:11:02.766822       1 main.go:223] Handling node with IPs: map[192.168.39.146:{}]
	I0621 19:11:02.767040       1 main.go:227] handling current node
	I0621 19:11:02.767096       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0621 19:11:02.767121       1 main.go:250] Node multinode-851952-m02 has CIDR [10.244.1.0/24] 
	I0621 19:11:12.778847       1 main.go:223] Handling node with IPs: map[192.168.39.146:{}]
	I0621 19:11:12.778963       1 main.go:227] handling current node
	I0621 19:11:12.778987       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0621 19:11:12.779006       1 main.go:250] Node multinode-851952-m02 has CIDR [10.244.1.0/24] 
	I0621 19:11:22.790216       1 main.go:223] Handling node with IPs: map[192.168.39.146:{}]
	I0621 19:11:22.790346       1 main.go:227] handling current node
	I0621 19:11:22.790385       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0621 19:11:22.790404       1 main.go:250] Node multinode-851952-m02 has CIDR [10.244.1.0/24] 
	I0621 19:11:32.795381       1 main.go:223] Handling node with IPs: map[192.168.39.146:{}]
	I0621 19:11:32.795520       1 main.go:227] handling current node
	I0621 19:11:32.795559       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0621 19:11:32.795584       1 main.go:250] Node multinode-851952-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [40087081e25d8085a666328a29561a84b540fe152452e7091cefd1db700e8acd] <==
	W0621 19:06:21.112937       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.113014       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.113045       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.114112       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.114684       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.114743       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.114782       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.114868       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115059       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115126       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115225       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115281       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115326       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115374       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115464       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115515       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115587       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115650       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115703       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115755       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115805       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.115924       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.116062       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.116091       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0621 19:06:21.116870       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [48fda169ce7646008e0341848682e03953afca18591e5318433acb9c645b3d49] <==
	I0621 19:08:00.112728       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0621 19:08:00.116191       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0621 19:08:00.116268       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0621 19:08:00.116307       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0621 19:08:00.114715       1 shared_informer.go:320] Caches are synced for configmaps
	I0621 19:08:00.122726       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0621 19:08:00.122841       1 aggregator.go:165] initial CRD sync complete...
	I0621 19:08:00.122923       1 autoregister_controller.go:141] Starting autoregister controller
	I0621 19:08:00.122950       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0621 19:08:00.123015       1 cache.go:39] Caches are synced for autoregister controller
	I0621 19:08:00.123213       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0621 19:08:00.114879       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0621 19:08:00.130785       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0621 19:08:00.156579       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0621 19:08:00.171617       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0621 19:08:00.171647       1 policy_source.go:224] refreshing policies
	I0621 19:08:00.213657       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0621 19:08:01.029786       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0621 19:08:02.416606       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0621 19:08:02.557705       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0621 19:08:02.571132       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0621 19:08:02.662942       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0621 19:08:02.670869       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0621 19:08:12.739919       1 controller.go:615] quota admission added evaluator for: endpoints
	I0621 19:08:12.750372       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [19e6c1b76c6742180378fd84a02cf3d13dc8f538fd4759f90984ca1b0cfbda0d] <==
	I0621 19:08:38.816564       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-851952-m02" podCIDRs=["10.244.1.0/24"]
	I0621 19:08:39.702648       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.627µs"
	I0621 19:08:39.712374       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.506µs"
	I0621 19:08:39.721473       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.259µs"
	I0621 19:08:39.764339       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.426µs"
	I0621 19:08:39.771999       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.309µs"
	I0621 19:08:39.776505       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.021µs"
	I0621 19:08:43.773455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.901µs"
	I0621 19:08:46.407126       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851952-m02"
	I0621 19:08:46.424294       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.206µs"
	I0621 19:08:46.444623       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.426µs"
	I0621 19:08:50.289208       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.091246ms"
	I0621 19:08:50.289330       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.057µs"
	I0621 19:09:04.440009       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851952-m02"
	I0621 19:09:05.388953       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851952-m02"
	I0621 19:09:05.390218       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-851952-m03\" does not exist"
	I0621 19:09:05.411119       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-851952-m03" podCIDRs=["10.244.2.0/24"]
	I0621 19:09:14.210867       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851952-m02"
	I0621 19:09:19.362393       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851952-m02"
	I0621 19:10:02.969717       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.385444ms"
	I0621 19:10:02.971450       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.758µs"
	I0621 19:10:12.741204       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-2jbqx"
	I0621 19:10:12.765547       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-2jbqx"
	I0621 19:10:12.765588       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-wmc6k"
	I0621 19:10:12.789436       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-wmc6k"
	
	
	==> kube-controller-manager [736b6d52184414f45058085f602c2205184a11654872f5c8b09b8379789a201c] <==
	I0621 19:02:47.633653       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-851952-m02\" does not exist"
	I0621 19:02:47.657061       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-851952-m02" podCIDRs=["10.244.1.0/24"]
	I0621 19:02:51.302256       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-851952-m02"
	I0621 19:02:56.957928       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851952-m02"
	I0621 19:02:59.104740       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.542777ms"
	I0621 19:02:59.139867       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.063021ms"
	I0621 19:02:59.165808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.886072ms"
	I0621 19:02:59.166035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.929µs"
	I0621 19:03:02.553788       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.273486ms"
	I0621 19:03:02.554612       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.834µs"
	I0621 19:03:03.156457       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.860829ms"
	I0621 19:03:03.156719       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.15µs"
	I0621 19:03:29.612625       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-851952-m03\" does not exist"
	I0621 19:03:29.612685       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851952-m02"
	I0621 19:03:29.653222       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-851952-m03" podCIDRs=["10.244.2.0/24"]
	I0621 19:03:31.321026       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-851952-m03"
	I0621 19:03:39.022422       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851952-m02"
	I0621 19:04:07.095940       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851952-m02"
	I0621 19:04:08.041739       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851952-m02"
	I0621 19:04:08.041858       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-851952-m03\" does not exist"
	I0621 19:04:08.056936       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-851952-m03" podCIDRs=["10.244.3.0/24"]
	I0621 19:04:15.498130       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851952-m02"
	I0621 19:05:01.370908       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-851952-m03"
	I0621 19:05:01.433019       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.714885ms"
	I0621 19:05:01.433515       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.684µs"
	
	
	==> kube-proxy [9da10767b93f9fd673b0149bec75fb836426f92a6e05b0dd34b0e7b07b3575b2] <==
	I0621 19:02:12.725614       1 server_linux.go:69] "Using iptables proxy"
	I0621 19:02:12.753321       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.146"]
	I0621 19:02:12.849454       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0621 19:02:12.849486       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0621 19:02:12.849505       1 server_linux.go:165] "Using iptables Proxier"
	I0621 19:02:12.856367       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0621 19:02:12.856666       1 server.go:872] "Version info" version="v1.30.2"
	I0621 19:02:12.856680       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 19:02:12.860220       1 config.go:192] "Starting service config controller"
	I0621 19:02:12.860236       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0621 19:02:12.860264       1 config.go:101] "Starting endpoint slice config controller"
	I0621 19:02:12.860267       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0621 19:02:12.860730       1 config.go:319] "Starting node config controller"
	I0621 19:02:12.860736       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0621 19:02:12.961594       1 shared_informer.go:320] Caches are synced for node config
	I0621 19:02:12.961622       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0621 19:02:12.961613       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [c77c4e18ef1f9562589ad7ac29c7ba5f0f96004e260278d0c12d931432215302] <==
	I0621 19:08:01.824748       1 server_linux.go:69] "Using iptables proxy"
	I0621 19:08:01.854294       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.146"]
	I0621 19:08:01.963970       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0621 19:08:01.964006       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0621 19:08:01.964023       1 server_linux.go:165] "Using iptables Proxier"
	I0621 19:08:01.969347       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0621 19:08:01.969592       1 server.go:872] "Version info" version="v1.30.2"
	I0621 19:08:01.969624       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 19:08:01.980289       1 config.go:192] "Starting service config controller"
	I0621 19:08:01.980331       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0621 19:08:01.980370       1 config.go:101] "Starting endpoint slice config controller"
	I0621 19:08:01.980375       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0621 19:08:01.981042       1 config.go:319] "Starting node config controller"
	I0621 19:08:01.981068       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0621 19:08:02.082252       1 shared_informer.go:320] Caches are synced for node config
	I0621 19:08:02.082286       1 shared_informer.go:320] Caches are synced for service config
	I0621 19:08:02.082338       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1932382e2a0188fb72b28909ac83ee13bd52cc5f6e016e8ffd77d1e3a08a85a2] <==
	I0621 19:07:57.730107       1 serving.go:380] Generated self-signed cert in-memory
	I0621 19:08:00.136667       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0621 19:08:00.136702       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 19:08:00.140383       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0621 19:08:00.140596       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0621 19:08:00.140638       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0621 19:08:00.140687       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0621 19:08:00.142275       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0621 19:08:00.144260       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0621 19:08:00.144303       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0621 19:08:00.144310       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0621 19:08:00.240874       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0621 19:08:00.246225       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0621 19:08:00.246281       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kube-scheduler [77ba488fac51d9683c16065c66b9a57f223578131eb37d5b3b8f4ee54ab59fd1] <==
	E0621 19:01:56.057538       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0621 19:01:56.057615       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0621 19:01:56.057647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0621 19:01:56.059844       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 19:01:56.059929       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0621 19:01:56.880747       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0621 19:01:56.880861       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0621 19:01:56.912295       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0621 19:01:56.912385       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0621 19:01:56.926259       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0621 19:01:56.926308       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0621 19:01:57.058063       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0621 19:01:57.058114       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0621 19:01:57.124344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0621 19:01:57.124395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0621 19:01:57.126975       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0621 19:01:57.127013       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0621 19:01:57.145571       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0621 19:01:57.145611       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0621 19:01:57.200954       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0621 19:01:57.201002       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0621 19:01:57.233122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0621 19:01:57.233212       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0621 19:01:58.650757       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0621 19:06:21.089254       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 21 19:08:00 multinode-851952 kubelet[3020]: I0621 19:08:00.952741    3020 topology_manager.go:215] "Topology Admit Handler" podUID="e789867a-771b-4879-b010-02d710e5742a" podNamespace="kube-system" podName="storage-provisioner"
	Jun 21 19:08:00 multinode-851952 kubelet[3020]: I0621 19:08:00.952852    3020 topology_manager.go:215] "Topology Admit Handler" podUID="fb5aa3b9-e31c-486b-bc01-8faea6986d7c" podNamespace="default" podName="busybox-fc5497c4f-rwq2d"
	Jun 21 19:08:00 multinode-851952 kubelet[3020]: I0621 19:08:00.965604    3020 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 21 19:08:01 multinode-851952 kubelet[3020]: I0621 19:08:01.023630    3020 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9727b60b-2689-4f26-9276-88efe3296374-xtables-lock\") pod \"kube-proxy-lcgp6\" (UID: \"9727b60b-2689-4f26-9276-88efe3296374\") " pod="kube-system/kube-proxy-lcgp6"
	Jun 21 19:08:01 multinode-851952 kubelet[3020]: I0621 19:08:01.023860    3020 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9727b60b-2689-4f26-9276-88efe3296374-lib-modules\") pod \"kube-proxy-lcgp6\" (UID: \"9727b60b-2689-4f26-9276-88efe3296374\") " pod="kube-system/kube-proxy-lcgp6"
	Jun 21 19:08:01 multinode-851952 kubelet[3020]: I0621 19:08:01.024001    3020 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/68820bdc-6391-4f97-ab90-8d100de2f0f1-cni-cfg\") pod \"kindnet-mrcqf\" (UID: \"68820bdc-6391-4f97-ab90-8d100de2f0f1\") " pod="kube-system/kindnet-mrcqf"
	Jun 21 19:08:01 multinode-851952 kubelet[3020]: I0621 19:08:01.024169    3020 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68820bdc-6391-4f97-ab90-8d100de2f0f1-lib-modules\") pod \"kindnet-mrcqf\" (UID: \"68820bdc-6391-4f97-ab90-8d100de2f0f1\") " pod="kube-system/kindnet-mrcqf"
	Jun 21 19:08:01 multinode-851952 kubelet[3020]: I0621 19:08:01.024266    3020 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68820bdc-6391-4f97-ab90-8d100de2f0f1-xtables-lock\") pod \"kindnet-mrcqf\" (UID: \"68820bdc-6391-4f97-ab90-8d100de2f0f1\") " pod="kube-system/kindnet-mrcqf"
	Jun 21 19:08:01 multinode-851952 kubelet[3020]: I0621 19:08:01.024350    3020 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e789867a-771b-4879-b010-02d710e5742a-tmp\") pod \"storage-provisioner\" (UID: \"e789867a-771b-4879-b010-02d710e5742a\") " pod="kube-system/storage-provisioner"
	Jun 21 19:08:09 multinode-851952 kubelet[3020]: I0621 19:08:09.388489    3020 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jun 21 19:08:56 multinode-851952 kubelet[3020]: E0621 19:08:56.010463    3020 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 19:08:56 multinode-851952 kubelet[3020]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 19:08:56 multinode-851952 kubelet[3020]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 19:08:56 multinode-851952 kubelet[3020]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 19:08:56 multinode-851952 kubelet[3020]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 19:09:56 multinode-851952 kubelet[3020]: E0621 19:09:56.011398    3020 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 19:09:56 multinode-851952 kubelet[3020]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 19:09:56 multinode-851952 kubelet[3020]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 19:09:56 multinode-851952 kubelet[3020]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 19:09:56 multinode-851952 kubelet[3020]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 21 19:10:56 multinode-851952 kubelet[3020]: E0621 19:10:56.006704    3020 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 21 19:10:56 multinode-851952 kubelet[3020]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 21 19:10:56 multinode-851952 kubelet[3020]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 21 19:10:56 multinode-851952 kubelet[3020]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 21 19:10:56 multinode-851952 kubelet[3020]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	E0621 19:11:39.902361   48660 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19112-8111/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-851952 -n multinode-851952
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-851952 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.14s)

x
+
TestPreload (352.12s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-509272 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0621 19:15:54.861908   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-509272 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (3m29.448038761s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-509272 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-509272 image pull gcr.io/k8s-minikube/busybox: (2.7302573s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-509272
E0621 19:20:37.913440   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-509272: exit status 82 (2m0.466565741s)

-- stdout --
	* Stopping node "test-preload-509272"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-509272 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-06-21 19:20:47.265788631 +0000 UTC m=+5981.397942799
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-509272 -n test-preload-509272
E0621 19:20:54.862224   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-509272 -n test-preload-509272: exit status 3 (18.583608s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0621 19:21:05.846139   52200 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	E0621 19:21:05.846158   52200 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-509272" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-509272" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-509272
--- FAIL: TestPreload (352.12s)

x
+
TestKubernetesUpgrade (382.45s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-371786 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-371786 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m57.735511324s)

-- stdout --
	* [kubernetes-upgrade-371786] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-371786" primary control-plane node in "kubernetes-upgrade-371786" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0621 19:26:48.667419   58916 out.go:291] Setting OutFile to fd 1 ...
	I0621 19:26:48.667634   58916 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 19:26:48.667670   58916 out.go:304] Setting ErrFile to fd 2...
	I0621 19:26:48.667706   58916 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 19:26:48.668737   58916 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 19:26:48.669623   58916 out.go:298] Setting JSON to false
	I0621 19:26:48.670701   58916 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7707,"bootTime":1718990302,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 19:26:48.670799   58916 start.go:139] virtualization: kvm guest
	I0621 19:26:48.673122   58916 out.go:177] * [kubernetes-upgrade-371786] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 19:26:48.674406   58916 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 19:26:48.674430   58916 notify.go:220] Checking for updates...
	I0621 19:26:48.677110   58916 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 19:26:48.678804   58916 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 19:26:48.680195   58916 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 19:26:48.681596   58916 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 19:26:48.683074   58916 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 19:26:48.684987   58916 config.go:182] Loaded profile config "NoKubernetes-262372": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0621 19:26:48.685133   58916 config.go:182] Loaded profile config "cert-expiration-843358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 19:26:48.685293   58916 config.go:182] Loaded profile config "pause-709611": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 19:26:48.685429   58916 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 19:26:48.722313   58916 out.go:177] * Using the kvm2 driver based on user configuration
	I0621 19:26:48.723697   58916 start.go:297] selected driver: kvm2
	I0621 19:26:48.723724   58916 start.go:901] validating driver "kvm2" against <nil>
	I0621 19:26:48.723741   58916 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 19:26:48.724815   58916 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 19:26:48.724930   58916 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 19:26:48.740428   58916 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 19:26:48.740495   58916 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0621 19:26:48.740797   58916 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0621 19:26:48.740871   58916 cni.go:84] Creating CNI manager for ""
	I0621 19:26:48.740887   58916 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0621 19:26:48.740902   58916 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0621 19:26:48.740982   58916 start.go:340] cluster config:
	{Name:kubernetes-upgrade-371786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-371786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 19:26:48.741105   58916 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 19:26:48.743787   58916 out.go:177] * Starting "kubernetes-upgrade-371786" primary control-plane node in "kubernetes-upgrade-371786" cluster
	I0621 19:26:48.745113   58916 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0621 19:26:48.745160   58916 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0621 19:26:48.745170   58916 cache.go:56] Caching tarball of preloaded images
	I0621 19:26:48.745269   58916 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 19:26:48.745284   58916 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0621 19:26:48.745406   58916 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/config.json ...
	I0621 19:26:48.745431   58916 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/config.json: {Name:mk2570a07286756a50cd1e215747eb5dcb12e39b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 19:26:48.745624   58916 start.go:360] acquireMachinesLock for kubernetes-upgrade-371786: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 19:27:17.210564   58916 start.go:364] duration metric: took 28.464894795s to acquireMachinesLock for "kubernetes-upgrade-371786"
	I0621 19:27:17.210634   58916 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-371786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-371786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 19:27:17.210752   58916 start.go:125] createHost starting for "" (driver="kvm2")
	I0621 19:27:17.212835   58916 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0621 19:27:17.213029   58916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 19:27:17.213085   58916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 19:27:17.229278   58916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38347
	I0621 19:27:17.229668   58916 main.go:141] libmachine: () Calling .GetVersion
	I0621 19:27:17.230259   58916 main.go:141] libmachine: Using API Version  1
	I0621 19:27:17.230280   58916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 19:27:17.230666   58916 main.go:141] libmachine: () Calling .GetMachineName
	I0621 19:27:17.230852   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetMachineName
	I0621 19:27:17.231017   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .DriverName
	I0621 19:27:17.231184   58916 start.go:159] libmachine.API.Create for "kubernetes-upgrade-371786" (driver="kvm2")
	I0621 19:27:17.231213   58916 client.go:168] LocalClient.Create starting
	I0621 19:27:17.231255   58916 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem
	I0621 19:27:17.231288   58916 main.go:141] libmachine: Decoding PEM data...
	I0621 19:27:17.231301   58916 main.go:141] libmachine: Parsing certificate...
	I0621 19:27:17.231352   58916 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem
	I0621 19:27:17.231372   58916 main.go:141] libmachine: Decoding PEM data...
	I0621 19:27:17.231393   58916 main.go:141] libmachine: Parsing certificate...
	I0621 19:27:17.231408   58916 main.go:141] libmachine: Running pre-create checks...
	I0621 19:27:17.231418   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .PreCreateCheck
	I0621 19:27:17.231813   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetConfigRaw
	I0621 19:27:17.232210   58916 main.go:141] libmachine: Creating machine...
	I0621 19:27:17.232223   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .Create
	I0621 19:27:17.232354   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Creating KVM machine...
	I0621 19:27:17.233530   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | found existing default KVM network
	I0621 19:27:17.234850   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | I0621 19:27:17.234681   59316 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:8f:af:db} reservation:<nil>}
	I0621 19:27:17.235697   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | I0621 19:27:17.235621   59316 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002626f0}
	I0621 19:27:17.235724   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | created network xml: 
	I0621 19:27:17.235738   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | <network>
	I0621 19:27:17.235747   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG |   <name>mk-kubernetes-upgrade-371786</name>
	I0621 19:27:17.235764   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG |   <dns enable='no'/>
	I0621 19:27:17.235772   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG |   
	I0621 19:27:17.235782   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0621 19:27:17.235789   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG |     <dhcp>
	I0621 19:27:17.235803   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0621 19:27:17.235811   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG |     </dhcp>
	I0621 19:27:17.235819   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG |   </ip>
	I0621 19:27:17.235824   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG |   
	I0621 19:27:17.235829   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | </network>
	I0621 19:27:17.235837   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | 
	I0621 19:27:17.241141   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | trying to create private KVM network mk-kubernetes-upgrade-371786 192.168.50.0/24...
	I0621 19:27:17.310354   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | private KVM network mk-kubernetes-upgrade-371786 192.168.50.0/24 created
	I0621 19:27:17.310385   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Setting up store path in /home/jenkins/minikube-integration/19112-8111/.minikube/machines/kubernetes-upgrade-371786 ...
	I0621 19:27:17.310394   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | I0621 19:27:17.310323   59316 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 19:27:17.310608   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Building disk image from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 19:27:17.310642   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Downloading /home/jenkins/minikube-integration/19112-8111/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso...
	I0621 19:27:17.544404   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | I0621 19:27:17.544280   59316 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/kubernetes-upgrade-371786/id_rsa...
	I0621 19:27:17.666203   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | I0621 19:27:17.666058   59316 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/kubernetes-upgrade-371786/kubernetes-upgrade-371786.rawdisk...
	I0621 19:27:17.666236   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | Writing magic tar header
	I0621 19:27:17.666255   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | Writing SSH key tar header
	I0621 19:27:17.666272   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | I0621 19:27:17.666166   59316 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/kubernetes-upgrade-371786 ...
	I0621 19:27:17.666304   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/kubernetes-upgrade-371786
	I0621 19:27:17.666323   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines/kubernetes-upgrade-371786 (perms=drwx------)
	I0621 19:27:17.666337   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube/machines
	I0621 19:27:17.666362   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube/machines (perms=drwxr-xr-x)
	I0621 19:27:17.666376   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 19:27:17.666393   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19112-8111
	I0621 19:27:17.666409   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0621 19:27:17.666422   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | Checking permissions on dir: /home/jenkins
	I0621 19:27:17.666435   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | Checking permissions on dir: /home
	I0621 19:27:17.666464   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111/.minikube (perms=drwxr-xr-x)
	I0621 19:27:17.666519   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | Skipping /home - not owner
	I0621 19:27:17.666556   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Setting executable bit set on /home/jenkins/minikube-integration/19112-8111 (perms=drwxrwxr-x)
	I0621 19:27:17.666636   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0621 19:27:17.666672   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0621 19:27:17.666690   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Creating domain...
	I0621 19:27:17.667486   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) define libvirt domain using xml: 
	I0621 19:27:17.667507   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) <domain type='kvm'>
	I0621 19:27:17.667523   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)   <name>kubernetes-upgrade-371786</name>
	I0621 19:27:17.667531   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)   <memory unit='MiB'>2200</memory>
	I0621 19:27:17.667542   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)   <vcpu>2</vcpu>
	I0621 19:27:17.667554   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)   <features>
	I0621 19:27:17.667568   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)     <acpi/>
	I0621 19:27:17.667579   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)     <apic/>
	I0621 19:27:17.667590   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)     <pae/>
	I0621 19:27:17.667601   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)     
	I0621 19:27:17.667613   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)   </features>
	I0621 19:27:17.667627   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)   <cpu mode='host-passthrough'>
	I0621 19:27:17.667637   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)   
	I0621 19:27:17.667646   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)   </cpu>
	I0621 19:27:17.667657   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)   <os>
	I0621 19:27:17.667668   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)     <type>hvm</type>
	I0621 19:27:17.667684   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)     <boot dev='cdrom'/>
	I0621 19:27:17.667696   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)     <boot dev='hd'/>
	I0621 19:27:17.667708   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)     <bootmenu enable='no'/>
	I0621 19:27:17.667721   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)   </os>
	I0621 19:27:17.667732   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)   <devices>
	I0621 19:27:17.667743   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)     <disk type='file' device='cdrom'>
	I0621 19:27:17.667764   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/kubernetes-upgrade-371786/boot2docker.iso'/>
	I0621 19:27:17.667777   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)       <target dev='hdc' bus='scsi'/>
	I0621 19:27:17.667788   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)       <readonly/>
	I0621 19:27:17.667798   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)     </disk>
	I0621 19:27:17.667814   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)     <disk type='file' device='disk'>
	I0621 19:27:17.667846   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0621 19:27:17.667874   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)       <source file='/home/jenkins/minikube-integration/19112-8111/.minikube/machines/kubernetes-upgrade-371786/kubernetes-upgrade-371786.rawdisk'/>
	I0621 19:27:17.667888   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)       <target dev='hda' bus='virtio'/>
	I0621 19:27:17.667897   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)     </disk>
	I0621 19:27:17.667909   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)     <interface type='network'>
	I0621 19:27:17.667923   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)       <source network='mk-kubernetes-upgrade-371786'/>
	I0621 19:27:17.667936   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)       <model type='virtio'/>
	I0621 19:27:17.667946   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)     </interface>
	I0621 19:27:17.667965   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)     <interface type='network'>
	I0621 19:27:17.667978   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)       <source network='default'/>
	I0621 19:27:17.667986   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)       <model type='virtio'/>
	I0621 19:27:17.667994   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)     </interface>
	I0621 19:27:17.668006   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)     <serial type='pty'>
	I0621 19:27:17.668019   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)       <target port='0'/>
	I0621 19:27:17.668041   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)     </serial>
	I0621 19:27:17.668052   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)     <console type='pty'>
	I0621 19:27:17.668065   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)       <target type='serial' port='0'/>
	I0621 19:27:17.668073   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)     </console>
	I0621 19:27:17.668081   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)     <rng model='virtio'>
	I0621 19:27:17.668093   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)       <backend model='random'>/dev/random</backend>
	I0621 19:27:17.668106   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)     </rng>
	I0621 19:27:17.668117   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)     
	I0621 19:27:17.668136   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)     
	I0621 19:27:17.668157   58916 main.go:141] libmachine: (kubernetes-upgrade-371786)   </devices>
	I0621 19:27:17.668175   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) </domain>
	I0621 19:27:17.668184   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) 
	I0621 19:27:17.673527   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:aa:79:29 in network default
	I0621 19:27:17.674134   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Ensuring networks are active...
	I0621 19:27:17.674161   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:17.674814   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Ensuring network default is active
	I0621 19:27:17.675150   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Ensuring network mk-kubernetes-upgrade-371786 is active
	I0621 19:27:17.675611   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Getting domain xml...
	I0621 19:27:17.676389   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Creating domain...
	I0621 19:27:18.989703   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Waiting to get IP...
	I0621 19:27:18.992030   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:18.992683   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | unable to find current IP address of domain kubernetes-upgrade-371786 in network mk-kubernetes-upgrade-371786
	I0621 19:27:18.992714   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | I0621 19:27:18.992639   59316 retry.go:31] will retry after 201.714675ms: waiting for machine to come up
	I0621 19:27:19.195815   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:19.196363   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | unable to find current IP address of domain kubernetes-upgrade-371786 in network mk-kubernetes-upgrade-371786
	I0621 19:27:19.196388   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | I0621 19:27:19.196310   59316 retry.go:31] will retry after 321.062633ms: waiting for machine to come up
	I0621 19:27:19.519006   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:19.519574   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | unable to find current IP address of domain kubernetes-upgrade-371786 in network mk-kubernetes-upgrade-371786
	I0621 19:27:19.519607   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | I0621 19:27:19.519519   59316 retry.go:31] will retry after 306.901822ms: waiting for machine to come up
	I0621 19:27:19.828267   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:19.828844   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | unable to find current IP address of domain kubernetes-upgrade-371786 in network mk-kubernetes-upgrade-371786
	I0621 19:27:19.828879   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | I0621 19:27:19.828761   59316 retry.go:31] will retry after 526.344799ms: waiting for machine to come up
	I0621 19:27:20.356813   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:20.357333   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | unable to find current IP address of domain kubernetes-upgrade-371786 in network mk-kubernetes-upgrade-371786
	I0621 19:27:20.357363   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | I0621 19:27:20.357283   59316 retry.go:31] will retry after 758.023559ms: waiting for machine to come up
	I0621 19:27:21.116922   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:21.117492   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | unable to find current IP address of domain kubernetes-upgrade-371786 in network mk-kubernetes-upgrade-371786
	I0621 19:27:21.117520   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | I0621 19:27:21.117422   59316 retry.go:31] will retry after 767.594694ms: waiting for machine to come up
	I0621 19:27:21.886214   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:21.886781   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | unable to find current IP address of domain kubernetes-upgrade-371786 in network mk-kubernetes-upgrade-371786
	I0621 19:27:21.886814   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | I0621 19:27:21.886722   59316 retry.go:31] will retry after 716.183826ms: waiting for machine to come up
	I0621 19:27:22.604814   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:22.605280   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | unable to find current IP address of domain kubernetes-upgrade-371786 in network mk-kubernetes-upgrade-371786
	I0621 19:27:22.605307   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | I0621 19:27:22.605245   59316 retry.go:31] will retry after 934.276031ms: waiting for machine to come up
	I0621 19:27:23.540839   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:23.541278   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | unable to find current IP address of domain kubernetes-upgrade-371786 in network mk-kubernetes-upgrade-371786
	I0621 19:27:23.541309   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | I0621 19:27:23.541223   59316 retry.go:31] will retry after 1.234384578s: waiting for machine to come up
	I0621 19:27:24.777895   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:24.778482   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | unable to find current IP address of domain kubernetes-upgrade-371786 in network mk-kubernetes-upgrade-371786
	I0621 19:27:24.778516   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | I0621 19:27:24.778445   59316 retry.go:31] will retry after 2.071072888s: waiting for machine to come up
	I0621 19:27:26.850998   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:26.851554   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | unable to find current IP address of domain kubernetes-upgrade-371786 in network mk-kubernetes-upgrade-371786
	I0621 19:27:26.851585   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | I0621 19:27:26.851491   59316 retry.go:31] will retry after 1.922756982s: waiting for machine to come up
	I0621 19:27:28.775805   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:28.776399   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | unable to find current IP address of domain kubernetes-upgrade-371786 in network mk-kubernetes-upgrade-371786
	I0621 19:27:28.776429   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | I0621 19:27:28.776340   59316 retry.go:31] will retry after 2.844705136s: waiting for machine to come up
	I0621 19:27:31.623530   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:31.624029   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | unable to find current IP address of domain kubernetes-upgrade-371786 in network mk-kubernetes-upgrade-371786
	I0621 19:27:31.624086   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | I0621 19:27:31.623999   59316 retry.go:31] will retry after 3.406254753s: waiting for machine to come up
	I0621 19:27:35.031469   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:35.031915   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | unable to find current IP address of domain kubernetes-upgrade-371786 in network mk-kubernetes-upgrade-371786
	I0621 19:27:35.031948   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | I0621 19:27:35.031872   59316 retry.go:31] will retry after 4.245732087s: waiting for machine to come up
	I0621 19:27:39.280245   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:39.280765   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Found IP for machine: 192.168.50.198
	I0621 19:27:39.280797   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has current primary IP address 192.168.50.198 and MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:39.280810   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Reserving static IP address...
	I0621 19:27:39.281160   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-371786", mac: "52:54:00:00:60:26", ip: "192.168.50.198"} in network mk-kubernetes-upgrade-371786
	I0621 19:27:39.361673   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | Getting to WaitForSSH function...
	I0621 19:27:39.361705   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Reserved static IP address: 192.168.50.198
	I0621 19:27:39.361720   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Waiting for SSH to be available...
	I0621 19:27:39.364542   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:39.365049   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:60:26", ip: ""} in network mk-kubernetes-upgrade-371786: {Iface:virbr2 ExpiryTime:2024-06-21 20:27:31 +0000 UTC Type:0 Mac:52:54:00:00:60:26 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:minikube Clientid:01:52:54:00:00:60:26}
	I0621 19:27:39.365080   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined IP address 192.168.50.198 and MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:39.365252   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | Using SSH client type: external
	I0621 19:27:39.365281   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | Using SSH private key: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/kubernetes-upgrade-371786/id_rsa (-rw-------)
	I0621 19:27:39.365321   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19112-8111/.minikube/machines/kubernetes-upgrade-371786/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0621 19:27:39.365336   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | About to run SSH command:
	I0621 19:27:39.365353   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | exit 0
	I0621 19:27:39.493894   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | SSH cmd err, output: <nil>: 
	I0621 19:27:39.494232   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) KVM machine creation complete!
	I0621 19:27:39.494569   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetConfigRaw
	I0621 19:27:39.495140   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .DriverName
	I0621 19:27:39.495350   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .DriverName
	I0621 19:27:39.495501   58916 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0621 19:27:39.495516   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetState
	I0621 19:27:39.496911   58916 main.go:141] libmachine: Detecting operating system of created instance...
	I0621 19:27:39.496928   58916 main.go:141] libmachine: Waiting for SSH to be available...
	I0621 19:27:39.496936   58916 main.go:141] libmachine: Getting to WaitForSSH function...
	I0621 19:27:39.496993   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHHostname
	I0621 19:27:39.499731   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:39.500139   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:60:26", ip: ""} in network mk-kubernetes-upgrade-371786: {Iface:virbr2 ExpiryTime:2024-06-21 20:27:31 +0000 UTC Type:0 Mac:52:54:00:00:60:26 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-371786 Clientid:01:52:54:00:00:60:26}
	I0621 19:27:39.500172   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined IP address 192.168.50.198 and MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:39.500292   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHPort
	I0621 19:27:39.500482   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHKeyPath
	I0621 19:27:39.500640   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHKeyPath
	I0621 19:27:39.500821   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHUsername
	I0621 19:27:39.501014   58916 main.go:141] libmachine: Using SSH client type: native
	I0621 19:27:39.501212   58916 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.198 22 <nil> <nil>}
	I0621 19:27:39.501225   58916 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0621 19:27:39.608875   58916 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 19:27:39.608909   58916 main.go:141] libmachine: Detecting the provisioner...
	I0621 19:27:39.608917   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHHostname
	I0621 19:27:39.611950   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:39.612342   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:60:26", ip: ""} in network mk-kubernetes-upgrade-371786: {Iface:virbr2 ExpiryTime:2024-06-21 20:27:31 +0000 UTC Type:0 Mac:52:54:00:00:60:26 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-371786 Clientid:01:52:54:00:00:60:26}
	I0621 19:27:39.612380   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined IP address 192.168.50.198 and MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:39.612585   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHPort
	I0621 19:27:39.612803   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHKeyPath
	I0621 19:27:39.612991   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHKeyPath
	I0621 19:27:39.613147   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHUsername
	I0621 19:27:39.613330   58916 main.go:141] libmachine: Using SSH client type: native
	I0621 19:27:39.613535   58916 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.198 22 <nil> <nil>}
	I0621 19:27:39.613552   58916 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0621 19:27:39.722718   58916 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0621 19:27:39.722786   58916 main.go:141] libmachine: found compatible host: buildroot
	I0621 19:27:39.722796   58916 main.go:141] libmachine: Provisioning with buildroot...
	I0621 19:27:39.722808   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetMachineName
	I0621 19:27:39.723068   58916 buildroot.go:166] provisioning hostname "kubernetes-upgrade-371786"
	I0621 19:27:39.723097   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetMachineName
	I0621 19:27:39.723288   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHHostname
	I0621 19:27:39.725875   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:39.726294   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:60:26", ip: ""} in network mk-kubernetes-upgrade-371786: {Iface:virbr2 ExpiryTime:2024-06-21 20:27:31 +0000 UTC Type:0 Mac:52:54:00:00:60:26 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-371786 Clientid:01:52:54:00:00:60:26}
	I0621 19:27:39.726326   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined IP address 192.168.50.198 and MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:39.726417   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHPort
	I0621 19:27:39.726592   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHKeyPath
	I0621 19:27:39.726754   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHKeyPath
	I0621 19:27:39.726936   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHUsername
	I0621 19:27:39.727100   58916 main.go:141] libmachine: Using SSH client type: native
	I0621 19:27:39.727331   58916 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.198 22 <nil> <nil>}
	I0621 19:27:39.727348   58916 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-371786 && echo "kubernetes-upgrade-371786" | sudo tee /etc/hostname
	I0621 19:27:39.851406   58916 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-371786
	
	I0621 19:27:39.851441   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHHostname
	I0621 19:27:39.854353   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:39.854796   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:60:26", ip: ""} in network mk-kubernetes-upgrade-371786: {Iface:virbr2 ExpiryTime:2024-06-21 20:27:31 +0000 UTC Type:0 Mac:52:54:00:00:60:26 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-371786 Clientid:01:52:54:00:00:60:26}
	I0621 19:27:39.854831   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined IP address 192.168.50.198 and MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:39.855043   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHPort
	I0621 19:27:39.855252   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHKeyPath
	I0621 19:27:39.855599   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHKeyPath
	I0621 19:27:39.855808   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHUsername
	I0621 19:27:39.856015   58916 main.go:141] libmachine: Using SSH client type: native
	I0621 19:27:39.856208   58916 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.198 22 <nil> <nil>}
	I0621 19:27:39.856226   58916 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-371786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-371786/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-371786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 19:27:39.977940   58916 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 19:27:39.977969   58916 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 19:27:39.978010   58916 buildroot.go:174] setting up certificates
	I0621 19:27:39.978022   58916 provision.go:84] configureAuth start
	I0621 19:27:39.978033   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetMachineName
	I0621 19:27:39.978307   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetIP
	I0621 19:27:39.980760   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:39.981204   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:60:26", ip: ""} in network mk-kubernetes-upgrade-371786: {Iface:virbr2 ExpiryTime:2024-06-21 20:27:31 +0000 UTC Type:0 Mac:52:54:00:00:60:26 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-371786 Clientid:01:52:54:00:00:60:26}
	I0621 19:27:39.981242   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined IP address 192.168.50.198 and MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:39.981441   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHHostname
	I0621 19:27:39.983600   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:39.983927   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:60:26", ip: ""} in network mk-kubernetes-upgrade-371786: {Iface:virbr2 ExpiryTime:2024-06-21 20:27:31 +0000 UTC Type:0 Mac:52:54:00:00:60:26 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-371786 Clientid:01:52:54:00:00:60:26}
	I0621 19:27:39.983949   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined IP address 192.168.50.198 and MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:39.984087   58916 provision.go:143] copyHostCerts
	I0621 19:27:39.984153   58916 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 19:27:39.984173   58916 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 19:27:39.984258   58916 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 19:27:39.984381   58916 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 19:27:39.984391   58916 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 19:27:39.984439   58916 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 19:27:39.984526   58916 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 19:27:39.984542   58916 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 19:27:39.984571   58916 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 19:27:39.984651   58916 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-371786 san=[127.0.0.1 192.168.50.198 kubernetes-upgrade-371786 localhost minikube]
	I0621 19:27:40.340372   58916 provision.go:177] copyRemoteCerts
	I0621 19:27:40.340446   58916 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 19:27:40.340477   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHHostname
	I0621 19:27:40.343726   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:40.344093   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:60:26", ip: ""} in network mk-kubernetes-upgrade-371786: {Iface:virbr2 ExpiryTime:2024-06-21 20:27:31 +0000 UTC Type:0 Mac:52:54:00:00:60:26 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-371786 Clientid:01:52:54:00:00:60:26}
	I0621 19:27:40.344125   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined IP address 192.168.50.198 and MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:40.344338   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHPort
	I0621 19:27:40.344566   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHKeyPath
	I0621 19:27:40.344746   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHUsername
	I0621 19:27:40.344895   58916 sshutil.go:53] new ssh client: &{IP:192.168.50.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/kubernetes-upgrade-371786/id_rsa Username:docker}
	I0621 19:27:40.431629   58916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0621 19:27:40.455662   58916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 19:27:40.478525   58916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 19:27:40.501197   58916 provision.go:87] duration metric: took 523.162474ms to configureAuth
	I0621 19:27:40.501228   58916 buildroot.go:189] setting minikube options for container-runtime
	I0621 19:27:40.501412   58916 config.go:182] Loaded profile config "kubernetes-upgrade-371786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0621 19:27:40.501505   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHHostname
	I0621 19:27:40.504018   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:40.504343   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:60:26", ip: ""} in network mk-kubernetes-upgrade-371786: {Iface:virbr2 ExpiryTime:2024-06-21 20:27:31 +0000 UTC Type:0 Mac:52:54:00:00:60:26 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-371786 Clientid:01:52:54:00:00:60:26}
	I0621 19:27:40.504370   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined IP address 192.168.50.198 and MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:40.504507   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHPort
	I0621 19:27:40.504728   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHKeyPath
	I0621 19:27:40.504888   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHKeyPath
	I0621 19:27:40.505053   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHUsername
	I0621 19:27:40.505197   58916 main.go:141] libmachine: Using SSH client type: native
	I0621 19:27:40.505420   58916 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.198 22 <nil> <nil>}
	I0621 19:27:40.505437   58916 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 19:27:40.774526   58916 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 19:27:40.774558   58916 main.go:141] libmachine: Checking connection to Docker...
	I0621 19:27:40.774571   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetURL
	I0621 19:27:40.775952   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | Using libvirt version 6000000
	I0621 19:27:40.778220   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:40.778560   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:60:26", ip: ""} in network mk-kubernetes-upgrade-371786: {Iface:virbr2 ExpiryTime:2024-06-21 20:27:31 +0000 UTC Type:0 Mac:52:54:00:00:60:26 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-371786 Clientid:01:52:54:00:00:60:26}
	I0621 19:27:40.778617   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined IP address 192.168.50.198 and MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:40.778808   58916 main.go:141] libmachine: Docker is up and running!
	I0621 19:27:40.778824   58916 main.go:141] libmachine: Reticulating splines...
	I0621 19:27:40.778834   58916 client.go:171] duration metric: took 23.54760795s to LocalClient.Create
	I0621 19:27:40.778866   58916 start.go:167] duration metric: took 23.547682914s to libmachine.API.Create "kubernetes-upgrade-371786"
	I0621 19:27:40.778879   58916 start.go:293] postStartSetup for "kubernetes-upgrade-371786" (driver="kvm2")
	I0621 19:27:40.778897   58916 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 19:27:40.778927   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .DriverName
	I0621 19:27:40.779206   58916 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 19:27:40.780452   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHHostname
	I0621 19:27:40.782803   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:40.783178   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:60:26", ip: ""} in network mk-kubernetes-upgrade-371786: {Iface:virbr2 ExpiryTime:2024-06-21 20:27:31 +0000 UTC Type:0 Mac:52:54:00:00:60:26 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-371786 Clientid:01:52:54:00:00:60:26}
	I0621 19:27:40.783225   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined IP address 192.168.50.198 and MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:40.783411   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHPort
	I0621 19:27:40.783626   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHKeyPath
	I0621 19:27:40.783819   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHUsername
	I0621 19:27:40.783985   58916 sshutil.go:53] new ssh client: &{IP:192.168.50.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/kubernetes-upgrade-371786/id_rsa Username:docker}
	I0621 19:27:40.868471   58916 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 19:27:40.872865   58916 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 19:27:40.872891   58916 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 19:27:40.872965   58916 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 19:27:40.873078   58916 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 19:27:40.873189   58916 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 19:27:40.882846   58916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 19:27:40.907405   58916 start.go:296] duration metric: took 128.508227ms for postStartSetup
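postStartSetup above walks ~/.minikube/files and mirrors each file's relative path onto the guest (files/etc/ssl/certs/153292.pem ends up at /etc/ssl/certs/153292.pem). A hedged Go sketch of that path mapping only; the real copy in minikube happens over SSH, and the root path here is simply the one from the log:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

// listAssets walks a local "files" tree and prints the guest path each file
// would be copied to, mirroring the filesync scan shown in the log above.
func listAssets(root string) error {
	return filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(root, path)
		if err != nil {
			return err
		}
		fmt.Printf("local asset: %s -> /%s\n", path, filepath.ToSlash(rel))
		return nil
	})
}

func main() {
	// Path taken from the log; adjust for a real environment.
	_ = listAssets("/home/jenkins/minikube-integration/19112-8111/.minikube/files")
}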
	I0621 19:27:40.907458   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetConfigRaw
	I0621 19:27:40.908078   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetIP
	I0621 19:27:40.910886   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:40.911197   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:60:26", ip: ""} in network mk-kubernetes-upgrade-371786: {Iface:virbr2 ExpiryTime:2024-06-21 20:27:31 +0000 UTC Type:0 Mac:52:54:00:00:60:26 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-371786 Clientid:01:52:54:00:00:60:26}
	I0621 19:27:40.911251   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined IP address 192.168.50.198 and MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:40.911408   58916 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/config.json ...
	I0621 19:27:40.911616   58916 start.go:128] duration metric: took 23.700852135s to createHost
	I0621 19:27:40.911641   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHHostname
	I0621 19:27:40.914180   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:40.914562   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:60:26", ip: ""} in network mk-kubernetes-upgrade-371786: {Iface:virbr2 ExpiryTime:2024-06-21 20:27:31 +0000 UTC Type:0 Mac:52:54:00:00:60:26 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-371786 Clientid:01:52:54:00:00:60:26}
	I0621 19:27:40.914603   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined IP address 192.168.50.198 and MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:40.914690   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHPort
	I0621 19:27:40.914905   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHKeyPath
	I0621 19:27:40.915122   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHKeyPath
	I0621 19:27:40.915319   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHUsername
	I0621 19:27:40.915495   58916 main.go:141] libmachine: Using SSH client type: native
	I0621 19:27:40.915702   58916 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.50.198 22 <nil> <nil>}
	I0621 19:27:40.915715   58916 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0621 19:27:41.026316   58916 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718998060.993433002
	
	I0621 19:27:41.026339   58916 fix.go:216] guest clock: 1718998060.993433002
	I0621 19:27:41.026348   58916 fix.go:229] Guest: 2024-06-21 19:27:40.993433002 +0000 UTC Remote: 2024-06-21 19:27:40.911630759 +0000 UTC m=+52.282570501 (delta=81.802243ms)
	I0621 19:27:41.026395   58916 fix.go:200] guest clock delta is within tolerance: 81.802243ms
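A note on the clock check above: minikube reads `date +%s.%N` from the guest and compares it with the host wall clock, accepting small drift. A rough Go sketch of that comparison; the 1s tolerance here is illustrative, not minikube's actual constant:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Example value taken from the log line above.
	guest, err := parseGuestClock("1718998060.993433002")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	// Tolerance chosen for illustration only.
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta < time.Second)
}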
	I0621 19:27:41.026404   58916 start.go:83] releasing machines lock for "kubernetes-upgrade-371786", held for 23.815804445s
	I0621 19:27:41.026437   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .DriverName
	I0621 19:27:41.026764   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetIP
	I0621 19:27:41.029944   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:41.030333   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:60:26", ip: ""} in network mk-kubernetes-upgrade-371786: {Iface:virbr2 ExpiryTime:2024-06-21 20:27:31 +0000 UTC Type:0 Mac:52:54:00:00:60:26 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-371786 Clientid:01:52:54:00:00:60:26}
	I0621 19:27:41.030367   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined IP address 192.168.50.198 and MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:41.030545   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .DriverName
	I0621 19:27:41.031126   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .DriverName
	I0621 19:27:41.031333   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .DriverName
	I0621 19:27:41.031412   58916 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 19:27:41.031445   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHHostname
	I0621 19:27:41.031549   58916 ssh_runner.go:195] Run: cat /version.json
	I0621 19:27:41.031571   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHHostname
	I0621 19:27:41.034053   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:41.034234   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:41.034427   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:60:26", ip: ""} in network mk-kubernetes-upgrade-371786: {Iface:virbr2 ExpiryTime:2024-06-21 20:27:31 +0000 UTC Type:0 Mac:52:54:00:00:60:26 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-371786 Clientid:01:52:54:00:00:60:26}
	I0621 19:27:41.034463   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined IP address 192.168.50.198 and MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:41.034587   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:60:26", ip: ""} in network mk-kubernetes-upgrade-371786: {Iface:virbr2 ExpiryTime:2024-06-21 20:27:31 +0000 UTC Type:0 Mac:52:54:00:00:60:26 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-371786 Clientid:01:52:54:00:00:60:26}
	I0621 19:27:41.034610   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined IP address 192.168.50.198 and MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:41.034638   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHPort
	I0621 19:27:41.034850   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHKeyPath
	I0621 19:27:41.034871   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHPort
	I0621 19:27:41.035038   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHKeyPath
	I0621 19:27:41.035049   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHUsername
	I0621 19:27:41.035197   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHUsername
	I0621 19:27:41.035194   58916 sshutil.go:53] new ssh client: &{IP:192.168.50.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/kubernetes-upgrade-371786/id_rsa Username:docker}
	I0621 19:27:41.035355   58916 sshutil.go:53] new ssh client: &{IP:192.168.50.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/kubernetes-upgrade-371786/id_rsa Username:docker}
	I0621 19:27:41.119397   58916 ssh_runner.go:195] Run: systemctl --version
	I0621 19:27:41.158021   58916 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 19:27:41.319931   58916 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 19:27:41.325712   58916 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 19:27:41.325772   58916 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 19:27:41.342360   58916 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0621 19:27:41.342386   58916 start.go:494] detecting cgroup driver to use...
	I0621 19:27:41.342455   58916 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 19:27:41.362048   58916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 19:27:41.377624   58916 docker.go:217] disabling cri-docker service (if available) ...
	I0621 19:27:41.377695   58916 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 19:27:41.393044   58916 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 19:27:41.408477   58916 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 19:27:41.523690   58916 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 19:27:41.675785   58916 docker.go:233] disabling docker service ...
	I0621 19:27:41.675876   58916 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 19:27:41.690130   58916 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 19:27:41.702656   58916 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 19:27:41.864233   58916 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 19:27:42.006007   58916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 19:27:42.019875   58916 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 19:27:42.039082   58916 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0621 19:27:42.039150   58916 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 19:27:42.049776   58916 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 19:27:42.049869   58916 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 19:27:42.061960   58916 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 19:27:42.073036   58916 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 19:27:42.083839   58916 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
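The sed edits above switch the pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. A minimal Go sketch of the same key = "value" substitution, assuming the drop-in keeps one setting per line (illustrative only, not minikube's own code):

package main

import (
	"fmt"
	"regexp"
)

// setConfValue replaces a `key = ...` line in a crio drop-in with a quoted value,
// mirroring the sed commands in the log above.
func setConfValue(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	conf = setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.2")
	conf = setConfValue(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}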
	I0621 19:27:42.103799   58916 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 19:27:42.115583   58916 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0621 19:27:42.115651   58916 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0621 19:27:42.132201   58916 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 19:27:42.143555   58916 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 19:27:42.289695   58916 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 19:27:42.442369   58916 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 19:27:42.442441   58916 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 19:27:42.448135   58916 start.go:562] Will wait 60s for crictl version
	I0621 19:27:42.448210   58916 ssh_runner.go:195] Run: which crictl
	I0621 19:27:42.452199   58916 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 19:27:42.495660   58916 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 19:27:42.495736   58916 ssh_runner.go:195] Run: crio --version
	I0621 19:27:42.526860   58916 ssh_runner.go:195] Run: crio --version
	I0621 19:27:42.558104   58916 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0621 19:27:42.559448   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetIP
	I0621 19:27:42.562483   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:42.562917   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:60:26", ip: ""} in network mk-kubernetes-upgrade-371786: {Iface:virbr2 ExpiryTime:2024-06-21 20:27:31 +0000 UTC Type:0 Mac:52:54:00:00:60:26 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-371786 Clientid:01:52:54:00:00:60:26}
	I0621 19:27:42.562961   58916 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined IP address 192.168.50.198 and MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:27:42.563193   58916 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0621 19:27:42.567528   58916 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0621 19:27:42.581177   58916 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-371786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-371786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.198 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0621 19:27:42.581277   58916 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0621 19:27:42.581332   58916 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 19:27:42.618667   58916 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0621 19:27:42.618739   58916 ssh_runner.go:195] Run: which lz4
	I0621 19:27:42.623049   58916 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0621 19:27:42.627661   58916 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0621 19:27:42.627695   58916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0621 19:27:44.202483   58916 crio.go:462] duration metric: took 1.579470712s to copy over tarball
	I0621 19:27:44.202559   58916 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0621 19:27:46.980160   58916 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.77756169s)
	I0621 19:27:46.980244   58916 crio.go:469] duration metric: took 2.777729725s to extract the tarball
	I0621 19:27:46.980259   58916 ssh_runner.go:146] rm: /preloaded.tar.lz4
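The preload step checks for /preloaded.tar.lz4, copies the cached tarball over, and unpacks it with tar -I lz4 into /var. A small sketch wrapping that same tar invocation with os/exec; it assumes lz4 and sudo are available on the target, as they are on the minikube guest image:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks a preloaded image tarball the way the log above does,
// preserving xattrs and using lz4 for decompression.
func extractPreload(tarball, dest string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("tar failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}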
	I0621 19:27:47.024234   58916 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 19:27:47.073222   58916 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0621 19:27:47.073307   58916 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0621 19:27:47.073391   58916 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0621 19:27:47.073412   58916 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0621 19:27:47.073428   58916 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0621 19:27:47.073443   58916 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0621 19:27:47.073468   58916 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0621 19:27:47.073520   58916 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0621 19:27:47.073399   58916 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0621 19:27:47.073445   58916 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0621 19:27:47.075299   58916 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0621 19:27:47.075341   58916 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0621 19:27:47.075301   58916 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0621 19:27:47.075317   58916 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0621 19:27:47.075371   58916 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0621 19:27:47.075338   58916 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0621 19:27:47.075641   58916 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0621 19:27:47.075729   58916 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0621 19:27:47.330725   58916 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0621 19:27:47.338692   58916 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0621 19:27:47.343138   58916 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0621 19:27:47.364131   58916 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0621 19:27:47.376851   58916 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0621 19:27:47.390414   58916 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0621 19:27:47.408054   58916 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0621 19:27:47.444925   58916 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0621 19:27:47.444971   58916 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0621 19:27:47.445020   58916 ssh_runner.go:195] Run: which crictl
	I0621 19:27:47.480168   58916 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0621 19:27:47.480254   58916 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0621 19:27:47.480324   58916 ssh_runner.go:195] Run: which crictl
	I0621 19:27:47.510999   58916 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0621 19:27:47.511051   58916 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0621 19:27:47.511100   58916 ssh_runner.go:195] Run: which crictl
	I0621 19:27:47.575098   58916 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0621 19:27:47.575158   58916 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0621 19:27:47.575219   58916 ssh_runner.go:195] Run: which crictl
	I0621 19:27:47.588700   58916 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0621 19:27:47.588752   58916 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0621 19:27:47.588806   58916 ssh_runner.go:195] Run: which crictl
	I0621 19:27:47.591300   58916 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0621 19:27:47.591345   58916 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0621 19:27:47.591391   58916 ssh_runner.go:195] Run: which crictl
	I0621 19:27:47.594150   58916 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0621 19:27:47.594192   58916 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0621 19:27:47.594241   58916 ssh_runner.go:195] Run: which crictl
	I0621 19:27:47.594318   58916 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0621 19:27:47.594380   58916 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0621 19:27:47.594445   58916 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0621 19:27:47.594523   58916 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0621 19:27:47.597302   58916 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0621 19:27:47.598578   58916 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0621 19:27:47.754688   58916 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0621 19:27:47.754709   58916 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0621 19:27:47.754709   58916 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0621 19:27:47.754768   58916 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0621 19:27:47.754906   58916 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0621 19:27:47.754973   58916 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0621 19:27:47.755039   58916 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0621 19:27:47.799641   58916 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0621 19:27:47.916152   58916 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0621 19:27:48.058504   58916 cache_images.go:92] duration metric: took 985.170563ms to LoadCachedImages
	W0621 19:27:48.058635   58916 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19112-8111/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19112-8111/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0621 19:27:48.058656   58916 kubeadm.go:928] updating node { 192.168.50.198 8443 v1.20.0 crio true true} ...
	I0621 19:27:48.058798   58916 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-371786 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-371786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0621 19:27:48.058893   58916 ssh_runner.go:195] Run: crio config
	I0621 19:27:48.120793   58916 cni.go:84] Creating CNI manager for ""
	I0621 19:27:48.120821   58916 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0621 19:27:48.120831   58916 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0621 19:27:48.120859   58916 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.198 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-371786 NodeName:kubernetes-upgrade-371786 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0621 19:27:48.121035   58916 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-371786"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0621 19:27:48.121116   58916 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0621 19:27:48.130681   58916 binaries.go:44] Found k8s binaries, skipping transfer
	I0621 19:27:48.130762   58916 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0621 19:27:48.140886   58916 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0621 19:27:48.158724   58916 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0621 19:27:48.175370   58916 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0621 19:27:48.193853   58916 ssh_runner.go:195] Run: grep 192.168.50.198	control-plane.minikube.internal$ /etc/hosts
	I0621 19:27:48.197846   58916 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.198	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
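The grep/echo pipeline above drops any stale control-plane.minikube.internal entry from /etc/hosts and appends the current one. A small Go sketch of the same rewrite, operating on an in-memory copy rather than /etc/hosts itself:

package main

import (
	"fmt"
	"strings"
)

// pinHost removes any existing entry ending in "\t"+name and appends ip<TAB>name,
// the same edit the shell pipeline in the log performs on /etc/hosts.
func pinHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	return strings.TrimRight(strings.Join(kept, "\n"), "\n") +
		fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.50.1\thost.minikube.internal\n"
	fmt.Print(pinHost(hosts, "192.168.50.198", "control-plane.minikube.internal"))
}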
	I0621 19:27:48.209987   58916 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 19:27:48.344101   58916 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0621 19:27:48.361309   58916 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786 for IP: 192.168.50.198
	I0621 19:27:48.361335   58916 certs.go:194] generating shared ca certs ...
	I0621 19:27:48.361357   58916 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 19:27:48.361518   58916 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 19:27:48.361565   58916 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 19:27:48.361574   58916 certs.go:256] generating profile certs ...
	I0621 19:27:48.361629   58916 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/client.key
	I0621 19:27:48.361647   58916 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/client.crt with IP's: []
	I0621 19:27:48.516337   58916 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/client.crt ...
	I0621 19:27:48.516366   58916 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/client.crt: {Name:mk1dcd21d74de71f246177e2ca2405ee0418d33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 19:27:48.516529   58916 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/client.key ...
	I0621 19:27:48.516542   58916 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/client.key: {Name:mk012557ffac7ad44660088b95fe002a9c13b7fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 19:27:48.516616   58916 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/apiserver.key.fcdce7e6
	I0621 19:27:48.516632   58916 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/apiserver.crt.fcdce7e6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.198]
	I0621 19:27:48.762444   58916 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/apiserver.crt.fcdce7e6 ...
	I0621 19:27:48.762475   58916 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/apiserver.crt.fcdce7e6: {Name:mk10b900c1afda2678d16376cdf33a50c2a9c308 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 19:27:48.762641   58916 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/apiserver.key.fcdce7e6 ...
	I0621 19:27:48.762663   58916 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/apiserver.key.fcdce7e6: {Name:mk0d368acf755bf880ed307b9b470a7a97f4f91b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 19:27:48.762752   58916 certs.go:381] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/apiserver.crt.fcdce7e6 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/apiserver.crt
	I0621 19:27:48.762843   58916 certs.go:385] copying /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/apiserver.key.fcdce7e6 -> /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/apiserver.key
	I0621 19:27:48.762903   58916 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/proxy-client.key
	I0621 19:27:48.762919   58916 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/proxy-client.crt with IP's: []
	I0621 19:27:48.841416   58916 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/proxy-client.crt ...
	I0621 19:27:48.841464   58916 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/proxy-client.crt: {Name:mk41048b3ae7be7471ee90d5ec420170d4e98913 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 19:27:48.916595   58916 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/proxy-client.key ...
	I0621 19:27:48.916648   58916 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/proxy-client.key: {Name:mk892af2a21e1a274962db87eae8b18f68f9ae7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
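The profile certs above (client, apiserver with its IP SANs, and the aggregator proxy-client cert) are generated and then signed against the shared minikubeCA. A minimal crypto/x509 sketch that produces a certificate carrying the same IP SANs; it self-signs for brevity, whereas minikube signs with its CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical sketch: key pair plus a cert with the apiserver IP SANs
	// listed in the log above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.50.198).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.198"),
		},
	}
	// Self-signed here for brevity; minikube uses its shared CA as the parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}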
	I0621 19:27:48.916911   58916 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 19:27:48.916981   58916 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 19:27:48.917000   58916 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 19:27:48.917029   58916 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 19:27:48.917064   58916 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 19:27:48.917090   58916 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 19:27:48.917149   58916 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 19:27:48.918001   58916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 19:27:48.945099   58916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 19:27:48.968105   58916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 19:27:48.995565   58916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 19:27:49.024222   58916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0621 19:27:49.052037   58916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0621 19:27:49.080982   58916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 19:27:49.113094   58916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0621 19:27:49.143002   58916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 19:27:49.172376   58916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 19:27:49.200238   58916 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 19:27:49.223806   58916 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0621 19:27:49.251594   58916 ssh_runner.go:195] Run: openssl version
	I0621 19:27:49.257247   58916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 19:27:49.268736   58916 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 19:27:49.276693   58916 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 19:27:49.276762   58916 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 19:27:49.284811   58916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 19:27:49.303094   58916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 19:27:49.315705   58916 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 19:27:49.320803   58916 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 19:27:49.320884   58916 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 19:27:49.326944   58916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 19:27:49.339168   58916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 19:27:49.351496   58916 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 19:27:49.356312   58916 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 19:27:49.356383   58916 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 19:27:49.363974   58916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
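Each PEM under /usr/share/ca-certificates is linked into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem). A sketch of the same openssl x509 -hash plus ln -fs sequence; the paths are the ones from the log and the program needs write access to /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash computes the OpenSSL subject hash of a PEM certificate and links
// it into certDir as <hash>.0, mirroring the `openssl x509 -hash -noout` and
// `ln -fs` commands in the log above.
func linkByHash(pemPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certDir, hash+".0")
	os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}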
	I0621 19:27:49.378530   58916 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 19:27:49.384045   58916 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0621 19:27:49.384117   58916 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-371786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-371786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.198 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 19:27:49.384232   58916 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0621 19:27:49.384301   58916 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0621 19:27:49.430978   58916 cri.go:89] found id: ""
	I0621 19:27:49.431070   58916 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0621 19:27:49.443253   58916 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0621 19:27:49.454638   58916 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0621 19:27:49.465484   58916 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0621 19:27:49.465511   58916 kubeadm.go:156] found existing configuration files:
	
	I0621 19:27:49.465570   58916 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0621 19:27:49.474781   58916 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0621 19:27:49.474842   58916 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0621 19:27:49.484198   58916 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0621 19:27:49.493296   58916 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0621 19:27:49.493368   58916 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0621 19:27:49.504069   58916 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0621 19:27:49.512777   58916 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0621 19:27:49.512854   58916 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0621 19:27:49.521726   58916 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0621 19:27:49.530290   58916 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0621 19:27:49.530358   58916 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0621 19:27:49.538932   58916 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0621 19:27:49.677122   58916 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0621 19:27:49.677261   58916 kubeadm.go:309] [preflight] Running pre-flight checks
	I0621 19:27:49.826009   58916 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0621 19:27:49.826174   58916 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0621 19:27:49.826297   58916 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0621 19:27:50.054339   58916 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0621 19:27:50.222615   58916 out.go:204]   - Generating certificates and keys ...
	I0621 19:27:50.222791   58916 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0621 19:27:50.222923   58916 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0621 19:27:50.270882   58916 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0621 19:27:50.554599   58916 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0621 19:27:50.733511   58916 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0621 19:27:51.037105   58916 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0621 19:27:51.225964   58916 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0621 19:27:51.226217   58916 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-371786 localhost] and IPs [192.168.50.198 127.0.0.1 ::1]
	I0621 19:27:51.304605   58916 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0621 19:27:51.304814   58916 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-371786 localhost] and IPs [192.168.50.198 127.0.0.1 ::1]
	I0621 19:27:51.894510   58916 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0621 19:27:52.125601   58916 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0621 19:27:52.657642   58916 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0621 19:27:52.657963   58916 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0621 19:27:52.872186   58916 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0621 19:27:52.965144   58916 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0621 19:27:53.251532   58916 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0621 19:27:53.330253   58916 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0621 19:27:53.348866   58916 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0621 19:27:53.349194   58916 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0621 19:27:53.349248   58916 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0621 19:27:53.482012   58916 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0621 19:27:53.483931   58916 out.go:204]   - Booting up control plane ...
	I0621 19:27:53.484045   58916 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0621 19:27:53.488578   58916 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0621 19:27:53.490070   58916 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0621 19:27:53.493300   58916 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0621 19:27:53.498013   58916 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0621 19:28:33.486482   58916 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0621 19:28:33.486778   58916 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0621 19:28:33.486992   58916 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0621 19:28:38.487328   58916 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0621 19:28:38.487542   58916 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0621 19:28:48.486911   58916 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0621 19:28:48.487167   58916 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0621 19:29:08.487128   58916 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0621 19:29:08.487403   58916 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0621 19:29:48.487880   58916 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0621 19:29:48.488182   58916 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0621 19:29:48.488220   58916 kubeadm.go:309] 
	I0621 19:29:48.488296   58916 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0621 19:29:48.488352   58916 kubeadm.go:309] 		timed out waiting for the condition
	I0621 19:29:48.488361   58916 kubeadm.go:309] 
	I0621 19:29:48.488404   58916 kubeadm.go:309] 	This error is likely caused by:
	I0621 19:29:48.488444   58916 kubeadm.go:309] 		- The kubelet is not running
	I0621 19:29:48.488567   58916 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0621 19:29:48.488581   58916 kubeadm.go:309] 
	I0621 19:29:48.488726   58916 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0621 19:29:48.488790   58916 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0621 19:29:48.488838   58916 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0621 19:29:48.488857   58916 kubeadm.go:309] 
	I0621 19:29:48.489012   58916 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0621 19:29:48.489190   58916 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0621 19:29:48.489228   58916 kubeadm.go:309] 
	I0621 19:29:48.489416   58916 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0621 19:29:48.489555   58916 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0621 19:29:48.489669   58916 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0621 19:29:48.489814   58916 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0621 19:29:48.489834   58916 kubeadm.go:309] 
	I0621 19:29:48.490688   58916 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0621 19:29:48.490777   58916 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0621 19:29:48.490833   58916 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0621 19:29:48.490993   58916 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-371786 localhost] and IPs [192.168.50.198 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-371786 localhost] and IPs [192.168.50.198 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-371786 localhost] and IPs [192.168.50.198 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-371786 localhost] and IPs [192.168.50.198 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0621 19:29:48.491059   58916 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0621 19:29:48.947439   58916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 19:29:48.962231   58916 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0621 19:29:48.971478   58916 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0621 19:29:48.971506   58916 kubeadm.go:156] found existing configuration files:
	
	I0621 19:29:48.971557   58916 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0621 19:29:48.980431   58916 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0621 19:29:48.980504   58916 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0621 19:29:48.990269   58916 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0621 19:29:48.999032   58916 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0621 19:29:48.999093   58916 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0621 19:29:49.007653   58916 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0621 19:29:49.016290   58916 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0621 19:29:49.016394   58916 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0621 19:29:49.025445   58916 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0621 19:29:49.034050   58916 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0621 19:29:49.034113   58916 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0621 19:29:49.042915   58916 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0621 19:29:49.247925   58916 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0621 19:31:45.599890   58916 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0621 19:31:45.600030   58916 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0621 19:31:45.601924   58916 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0621 19:31:45.601995   58916 kubeadm.go:309] [preflight] Running pre-flight checks
	I0621 19:31:45.602112   58916 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0621 19:31:45.602287   58916 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0621 19:31:45.602419   58916 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0621 19:31:45.602506   58916 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0621 19:31:45.604240   58916 out.go:204]   - Generating certificates and keys ...
	I0621 19:31:45.604330   58916 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0621 19:31:45.604406   58916 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0621 19:31:45.604507   58916 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0621 19:31:45.604584   58916 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0621 19:31:45.604670   58916 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0621 19:31:45.604740   58916 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0621 19:31:45.604823   58916 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0621 19:31:45.604919   58916 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0621 19:31:45.605015   58916 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0621 19:31:45.605110   58916 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0621 19:31:45.605161   58916 kubeadm.go:309] [certs] Using the existing "sa" key
	I0621 19:31:45.605234   58916 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0621 19:31:45.605299   58916 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0621 19:31:45.605366   58916 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0621 19:31:45.605449   58916 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0621 19:31:45.605523   58916 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0621 19:31:45.605662   58916 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0621 19:31:45.605776   58916 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0621 19:31:45.605850   58916 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0621 19:31:45.605944   58916 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0621 19:31:45.607280   58916 out.go:204]   - Booting up control plane ...
	I0621 19:31:45.607397   58916 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0621 19:31:45.607510   58916 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0621 19:31:45.607609   58916 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0621 19:31:45.607719   58916 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0621 19:31:45.607899   58916 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0621 19:31:45.607961   58916 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0621 19:31:45.608032   58916 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0621 19:31:45.608277   58916 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0621 19:31:45.608351   58916 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0621 19:31:45.608521   58916 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0621 19:31:45.608588   58916 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0621 19:31:45.608758   58916 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0621 19:31:45.608824   58916 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0621 19:31:45.609033   58916 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0621 19:31:45.609091   58916 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0621 19:31:45.609271   58916 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0621 19:31:45.609282   58916 kubeadm.go:309] 
	I0621 19:31:45.609330   58916 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0621 19:31:45.609382   58916 kubeadm.go:309] 		timed out waiting for the condition
	I0621 19:31:45.609393   58916 kubeadm.go:309] 
	I0621 19:31:45.609437   58916 kubeadm.go:309] 	This error is likely caused by:
	I0621 19:31:45.609483   58916 kubeadm.go:309] 		- The kubelet is not running
	I0621 19:31:45.609621   58916 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0621 19:31:45.609633   58916 kubeadm.go:309] 
	I0621 19:31:45.609755   58916 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0621 19:31:45.609787   58916 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0621 19:31:45.609833   58916 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0621 19:31:45.609845   58916 kubeadm.go:309] 
	I0621 19:31:45.609988   58916 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0621 19:31:45.610063   58916 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0621 19:31:45.610081   58916 kubeadm.go:309] 
	I0621 19:31:45.610206   58916 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0621 19:31:45.610330   58916 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0621 19:31:45.610453   58916 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0621 19:31:45.610572   58916 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0621 19:31:45.610642   58916 kubeadm.go:309] 
	I0621 19:31:45.610654   58916 kubeadm.go:393] duration metric: took 3m56.226538177s to StartCluster
	I0621 19:31:45.610701   58916 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0621 19:31:45.610799   58916 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0621 19:31:45.672829   58916 cri.go:89] found id: ""
	I0621 19:31:45.672860   58916 logs.go:276] 0 containers: []
	W0621 19:31:45.672870   58916 logs.go:278] No container was found matching "kube-apiserver"
	I0621 19:31:45.672877   58916 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0621 19:31:45.672940   58916 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0621 19:31:45.714671   58916 cri.go:89] found id: ""
	I0621 19:31:45.714699   58916 logs.go:276] 0 containers: []
	W0621 19:31:45.714710   58916 logs.go:278] No container was found matching "etcd"
	I0621 19:31:45.714718   58916 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0621 19:31:45.714785   58916 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0621 19:31:45.753385   58916 cri.go:89] found id: ""
	I0621 19:31:45.753417   58916 logs.go:276] 0 containers: []
	W0621 19:31:45.753428   58916 logs.go:278] No container was found matching "coredns"
	I0621 19:31:45.753435   58916 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0621 19:31:45.753498   58916 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0621 19:31:45.794181   58916 cri.go:89] found id: ""
	I0621 19:31:45.794217   58916 logs.go:276] 0 containers: []
	W0621 19:31:45.794230   58916 logs.go:278] No container was found matching "kube-scheduler"
	I0621 19:31:45.794238   58916 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0621 19:31:45.794298   58916 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0621 19:31:45.831198   58916 cri.go:89] found id: ""
	I0621 19:31:45.831227   58916 logs.go:276] 0 containers: []
	W0621 19:31:45.831238   58916 logs.go:278] No container was found matching "kube-proxy"
	I0621 19:31:45.831245   58916 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0621 19:31:45.831300   58916 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0621 19:31:45.866586   58916 cri.go:89] found id: ""
	I0621 19:31:45.866615   58916 logs.go:276] 0 containers: []
	W0621 19:31:45.866624   58916 logs.go:278] No container was found matching "kube-controller-manager"
	I0621 19:31:45.866631   58916 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0621 19:31:45.866694   58916 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0621 19:31:45.911470   58916 cri.go:89] found id: ""
	I0621 19:31:45.911505   58916 logs.go:276] 0 containers: []
	W0621 19:31:45.911516   58916 logs.go:278] No container was found matching "kindnet"
	I0621 19:31:45.911531   58916 logs.go:123] Gathering logs for dmesg ...
	I0621 19:31:45.911549   58916 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0621 19:31:45.926281   58916 logs.go:123] Gathering logs for describe nodes ...
	I0621 19:31:45.926318   58916 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0621 19:31:46.111597   58916 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0621 19:31:46.111629   58916 logs.go:123] Gathering logs for CRI-O ...
	I0621 19:31:46.111649   58916 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0621 19:31:46.224741   58916 logs.go:123] Gathering logs for container status ...
	I0621 19:31:46.224783   58916 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0621 19:31:46.268044   58916 logs.go:123] Gathering logs for kubelet ...
	I0621 19:31:46.268075   58916 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0621 19:31:46.346390   58916 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0621 19:31:46.346465   58916 out.go:239] * 
	* 
	W0621 19:31:46.346614   58916 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0621 19:31:46.346666   58916 out.go:239] * 
	* 
	W0621 19:31:46.347926   58916 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0621 19:31:46.351610   58916 out.go:177] 
	W0621 19:31:46.352751   58916 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0621 19:31:46.352811   58916 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0621 19:31:46.352834   58916 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0621 19:31:46.354345   58916 out.go:177] 

                                                
                                                
** /stderr **
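The stderr above ends with minikube's own suggestion to pass a kubelet cgroup-driver override. A sketch of how that suggestion could be applied to this exact profile, reusing only flags already present in this run (memory, Kubernetes version, driver, runtime) plus the suggested --extra-config flag, would be:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-371786 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd

This is illustrative only; the report does not show whether it would clear the kubelet health-check timeout on this host.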
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-371786 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-371786
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-371786: (2.028743257s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-371786 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-371786 status --format={{.Host}}: exit status 7 (73.822397ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
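The status check above selects only the Host field through a Go template. The same flag can select additional status fields; the extra field names below (Kubelet, APIServer, Kubeconfig) are assumptions based on minikube's usual status output, not something this log verifies:

	out/minikube-linux-amd64 -p kubernetes-upgrade-371786 status --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'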
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-371786 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-371786 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (51.514667247s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-371786 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-371786 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-371786 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (84.835292ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-371786] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-371786
	    minikube start -p kubernetes-upgrade-371786 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3717862 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.2, by running:
	    
	    minikube start -p kubernetes-upgrade-371786 --kubernetes-version=v1.30.2
	    

                                                
                                                
** /stderr **
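The downgrade suggestion above lists bare minikube commands; this profile was created with the kvm2 driver and the crio runtime, so a sketch of option 1 that keeps those flags explicit (all flags taken from earlier commands in this test, outcome not verified here) would be:

	out/minikube-linux-amd64 delete -p kubernetes-upgrade-371786
	out/minikube-linux-amd64 start -p kubernetes-upgrade-371786 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio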
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-371786 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-371786 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (27.930221801s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-06-21 19:33:08.114878395 +0000 UTC m=+6722.247032580
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-371786 -n kubernetes-upgrade-371786
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-371786 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-371786 logs -n 25: (1.308061427s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-313995 sudo                        | custom-flannel-313995 | jenkins | v1.33.1 | 21 Jun 24 19:32 UTC | 21 Jun 24 19:32 UTC |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-313995                             | custom-flannel-313995 | jenkins | v1.33.1 | 21 Jun 24 19:32 UTC | 21 Jun 24 19:33 UTC |
	|         | sudo systemctl cat kubelet                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-313995 sudo                        | custom-flannel-313995 | jenkins | v1.33.1 | 21 Jun 24 19:33 UTC | 21 Jun 24 19:33 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-313995                             | custom-flannel-313995 | jenkins | v1.33.1 | 21 Jun 24 19:33 UTC | 21 Jun 24 19:33 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-313995                             | custom-flannel-313995 | jenkins | v1.33.1 | 21 Jun 24 19:33 UTC | 21 Jun 24 19:33 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-313995 sudo                        | custom-flannel-313995 | jenkins | v1.33.1 | 21 Jun 24 19:33 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-313995                             | custom-flannel-313995 | jenkins | v1.33.1 | 21 Jun 24 19:33 UTC | 21 Jun 24 19:33 UTC |
	|         | sudo systemctl cat docker                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-313995 sudo                        | custom-flannel-313995 | jenkins | v1.33.1 | 21 Jun 24 19:33 UTC | 21 Jun 24 19:33 UTC |
	|         | cat /etc/docker/daemon.json                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-313995 sudo                        | custom-flannel-313995 | jenkins | v1.33.1 | 21 Jun 24 19:33 UTC |                     |
	|         | docker system info                                   |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-313995 sudo                        | custom-flannel-313995 | jenkins | v1.33.1 | 21 Jun 24 19:33 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-313995                             | custom-flannel-313995 | jenkins | v1.33.1 | 21 Jun 24 19:33 UTC | 21 Jun 24 19:33 UTC |
	|         | sudo systemctl cat cri-docker                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-313995 sudo cat                    | custom-flannel-313995 | jenkins | v1.33.1 | 21 Jun 24 19:33 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-313995 sudo cat                    | custom-flannel-313995 | jenkins | v1.33.1 | 21 Jun 24 19:33 UTC | 21 Jun 24 19:33 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-313995 sudo                        | custom-flannel-313995 | jenkins | v1.33.1 | 21 Jun 24 19:33 UTC | 21 Jun 24 19:33 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-313995 sudo                        | custom-flannel-313995 | jenkins | v1.33.1 | 21 Jun 24 19:33 UTC |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-313995                             | custom-flannel-313995 | jenkins | v1.33.1 | 21 Jun 24 19:33 UTC | 21 Jun 24 19:33 UTC |
	|         | sudo systemctl cat containerd                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-313995 sudo cat                    | custom-flannel-313995 | jenkins | v1.33.1 | 21 Jun 24 19:33 UTC | 21 Jun 24 19:33 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-313995                             | custom-flannel-313995 | jenkins | v1.33.1 | 21 Jun 24 19:33 UTC | 21 Jun 24 19:33 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-313995 sudo                        | custom-flannel-313995 | jenkins | v1.33.1 | 21 Jun 24 19:33 UTC | 21 Jun 24 19:33 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-313995 sudo                        | custom-flannel-313995 | jenkins | v1.33.1 | 21 Jun 24 19:33 UTC | 21 Jun 24 19:33 UTC |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-313995 sudo                        | custom-flannel-313995 | jenkins | v1.33.1 | 21 Jun 24 19:33 UTC | 21 Jun 24 19:33 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-313995 sudo                        | custom-flannel-313995 | jenkins | v1.33.1 | 21 Jun 24 19:33 UTC | 21 Jun 24 19:33 UTC |
	|         | find /etc/crio -type f -exec                         |                       |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-313995 sudo                        | custom-flannel-313995 | jenkins | v1.33.1 | 21 Jun 24 19:33 UTC | 21 Jun 24 19:33 UTC |
	|         | crio config                                          |                       |         |         |                     |                     |
	| delete  | -p custom-flannel-313995                             | custom-flannel-313995 | jenkins | v1.33.1 | 21 Jun 24 19:33 UTC | 21 Jun 24 19:33 UTC |
	| start   | -p bridge-313995 --memory=3072                       | bridge-313995         | jenkins | v1.33.1 | 21 Jun 24 19:33 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                       |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                       |         |         |                     |                     |
	|         | --cni=bridge --driver=kvm2                           |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/21 19:33:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0621 19:33:06.503737   68881 out.go:291] Setting OutFile to fd 1 ...
	I0621 19:33:06.503842   68881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 19:33:06.503852   68881 out.go:304] Setting ErrFile to fd 2...
	I0621 19:33:06.503857   68881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 19:33:06.504047   68881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 19:33:06.504602   68881 out.go:298] Setting JSON to false
	I0621 19:33:06.505737   68881 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8084,"bootTime":1718990302,"procs":303,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 19:33:06.505821   68881 start.go:139] virtualization: kvm guest
	I0621 19:33:06.508178   68881 out.go:177] * [bridge-313995] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 19:33:06.509669   68881 notify.go:220] Checking for updates...
	I0621 19:33:06.509693   68881 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 19:33:06.510937   68881 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 19:33:06.512066   68881 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 19:33:06.513300   68881 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 19:33:06.514577   68881 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 19:33:06.516000   68881 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 19:33:05.528624   67351 api_server.go:253] Checking apiserver healthz at https://192.168.50.198:8443/healthz ...
	I0621 19:33:05.534230   67351 api_server.go:279] https://192.168.50.198:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0621 19:33:05.534253   67351 api_server.go:103] status: https://192.168.50.198:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0621 19:33:06.028446   67351 api_server.go:253] Checking apiserver healthz at https://192.168.50.198:8443/healthz ...
	I0621 19:33:06.034029   67351 api_server.go:279] https://192.168.50.198:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0621 19:33:06.034082   67351 api_server.go:103] status: https://192.168.50.198:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0621 19:33:06.529060   67351 api_server.go:253] Checking apiserver healthz at https://192.168.50.198:8443/healthz ...
	I0621 19:33:06.535146   67351 api_server.go:279] https://192.168.50.198:8443/healthz returned 200:
	ok
	I0621 19:33:06.542047   67351 api_server.go:141] control plane version: v1.30.2
	I0621 19:33:06.542077   67351 api_server.go:131] duration metric: took 4.514047441s to wait for apiserver health ...
	I0621 19:33:06.542085   67351 cni.go:84] Creating CNI manager for ""
	I0621 19:33:06.542091   67351 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0621 19:33:06.544043   67351 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0621 19:33:06.517693   68881 config.go:182] Loaded profile config "enable-default-cni-313995": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 19:33:06.517840   68881 config.go:182] Loaded profile config "flannel-313995": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 19:33:06.517936   68881 config.go:182] Loaded profile config "kubernetes-upgrade-371786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 19:33:06.518025   68881 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 19:33:06.563620   68881 out.go:177] * Using the kvm2 driver based on user configuration
	I0621 19:33:06.565157   68881 start.go:297] selected driver: kvm2
	I0621 19:33:06.565197   68881 start.go:901] validating driver "kvm2" against <nil>
	I0621 19:33:06.565213   68881 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 19:33:06.566331   68881 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 19:33:06.566443   68881 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 19:33:06.583293   68881 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 19:33:06.583347   68881 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0621 19:33:06.583612   68881 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0621 19:33:06.583695   68881 cni.go:84] Creating CNI manager for "bridge"
	I0621 19:33:06.583713   68881 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0621 19:33:06.583783   68881 start.go:340] cluster config:
	{Name:bridge-313995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:bridge-313995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 19:33:06.583941   68881 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 19:33:06.587172   68881 out.go:177] * Starting "bridge-313995" primary control-plane node in "bridge-313995" cluster
	I0621 19:33:06.588539   68881 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 19:33:06.588578   68881 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 19:33:06.588590   68881 cache.go:56] Caching tarball of preloaded images
	I0621 19:33:06.588668   68881 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 19:33:06.588679   68881 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 19:33:06.588777   68881 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/bridge-313995/config.json ...
	I0621 19:33:06.588808   68881 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/bridge-313995/config.json: {Name:mk1686e698d31971a5c3931beac20bf1a87f6d7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 19:33:06.588981   68881 start.go:360] acquireMachinesLock for bridge-313995: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 19:33:06.589014   68881 start.go:364] duration metric: took 18.07µs to acquireMachinesLock for "bridge-313995"
	I0621 19:33:06.589037   68881 start.go:93] Provisioning new machine with config: &{Name:bridge-313995 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:bridge-313995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 19:33:06.589131   68881 start.go:125] createHost starting for "" (driver="kvm2")
	I0621 19:33:06.545358   67351 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0621 19:33:06.559409   67351 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0621 19:33:06.577486   67351 system_pods.go:43] waiting for kube-system pods to appear ...
	I0621 19:33:06.587051   67351 system_pods.go:59] 5 kube-system pods found
	I0621 19:33:06.587084   67351 system_pods.go:61] "etcd-kubernetes-upgrade-371786" [a0490a7d-6933-45a6-9d79-62e9f5527cc1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0621 19:33:06.587092   67351 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-371786" [8476bcf6-db70-4a56-aa4e-4ba8beb91d21] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0621 19:33:06.587105   67351 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-371786" [c420e17e-a964-47d0-a070-7ebbd63daacf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0621 19:33:06.587115   67351 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-371786" [04b468ac-294e-4098-a77d-fbab32c2f7d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0621 19:33:06.587122   67351 system_pods.go:61] "storage-provisioner" [936e5b50-70e8-4cca-af1b-1c5e5123e743] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0621 19:33:06.587132   67351 system_pods.go:74] duration metric: took 9.616241ms to wait for pod list to return data ...
	I0621 19:33:06.587146   67351 node_conditions.go:102] verifying NodePressure condition ...
	I0621 19:33:06.591503   67351 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0621 19:33:06.591527   67351 node_conditions.go:123] node cpu capacity is 2
	I0621 19:33:06.591537   67351 node_conditions.go:105] duration metric: took 4.385844ms to run NodePressure ...
	I0621 19:33:06.591552   67351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0621 19:33:06.922898   67351 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0621 19:33:06.941236   67351 ops.go:34] apiserver oom_adj: -16
	I0621 19:33:06.941258   67351 kubeadm.go:591] duration metric: took 7.870319648s to restartPrimaryControlPlane
	I0621 19:33:06.941266   67351 kubeadm.go:393] duration metric: took 7.99418607s to StartCluster
	I0621 19:33:06.941281   67351 settings.go:142] acquiring lock: {Name:mkdbb660cad4d8fb446e5c2ca4439ea3326e9592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 19:33:06.941353   67351 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 19:33:06.942194   67351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/kubeconfig: {Name:mk87038194ab41f67dd50d90b017d32a83c3da4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 19:33:06.942422   67351 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 19:33:06.942530   67351 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0621 19:33:06.942640   67351 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-371786"
	I0621 19:33:06.942651   67351 config.go:182] Loaded profile config "kubernetes-upgrade-371786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 19:33:06.942673   67351 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-371786"
	W0621 19:33:06.942683   67351 addons.go:243] addon storage-provisioner should already be in state true
	I0621 19:33:06.942676   67351 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-371786"
	I0621 19:33:06.942717   67351 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-371786"
	I0621 19:33:06.942720   67351 host.go:66] Checking if "kubernetes-upgrade-371786" exists ...
	I0621 19:33:06.943145   67351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 19:33:06.943188   67351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 19:33:06.943145   67351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 19:33:06.943293   67351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 19:33:06.944161   67351 out.go:177] * Verifying Kubernetes components...
	I0621 19:33:06.945556   67351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 19:33:06.960259   67351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35379
	I0621 19:33:06.960745   67351 main.go:141] libmachine: () Calling .GetVersion
	I0621 19:33:06.961299   67351 main.go:141] libmachine: Using API Version  1
	I0621 19:33:06.961324   67351 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 19:33:06.961777   67351 main.go:141] libmachine: () Calling .GetMachineName
	I0621 19:33:06.962046   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetState
	I0621 19:33:06.964870   67351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38437
	I0621 19:33:06.964961   67351 kapi.go:59] client config for kubernetes-upgrade-371786: &rest.Config{Host:"https://192.168.50.198:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/client.crt", KeyFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/kubernetes-upgrade-371786/client.key", CAFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf98a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0621 19:33:06.965212   67351 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-371786"
	W0621 19:33:06.965221   67351 addons.go:243] addon default-storageclass should already be in state true
	I0621 19:33:06.965245   67351 host.go:66] Checking if "kubernetes-upgrade-371786" exists ...
	I0621 19:33:06.965355   67351 main.go:141] libmachine: () Calling .GetVersion
	I0621 19:33:06.965598   67351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 19:33:06.965632   67351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 19:33:06.965984   67351 main.go:141] libmachine: Using API Version  1
	I0621 19:33:06.965998   67351 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 19:33:06.966326   67351 main.go:141] libmachine: () Calling .GetMachineName
	I0621 19:33:06.966820   67351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 19:33:06.966853   67351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 19:33:06.983240   67351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36801
	I0621 19:33:06.983716   67351 main.go:141] libmachine: () Calling .GetVersion
	I0621 19:33:06.984256   67351 main.go:141] libmachine: Using API Version  1
	I0621 19:33:06.984279   67351 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 19:33:06.984912   67351 main.go:141] libmachine: () Calling .GetMachineName
	I0621 19:33:06.985153   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetState
	I0621 19:33:06.987239   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .DriverName
	I0621 19:33:06.988977   67351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39137
	I0621 19:33:06.989299   67351 main.go:141] libmachine: () Calling .GetVersion
	I0621 19:33:06.989332   67351 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0621 19:33:06.989733   67351 main.go:141] libmachine: Using API Version  1
	I0621 19:33:06.989746   67351 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 19:33:06.990128   67351 main.go:141] libmachine: () Calling .GetMachineName
	I0621 19:33:06.990716   67351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 19:33:06.990752   67351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 19:33:06.990878   67351 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 19:33:06.990894   67351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0621 19:33:06.990913   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHHostname
	I0621 19:33:06.994267   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:33:06.994709   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:60:26", ip: ""} in network mk-kubernetes-upgrade-371786: {Iface:virbr2 ExpiryTime:2024-06-21 20:32:16 +0000 UTC Type:0 Mac:52:54:00:00:60:26 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-371786 Clientid:01:52:54:00:00:60:26}
	I0621 19:33:06.994731   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined IP address 192.168.50.198 and MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:33:06.994983   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHPort
	I0621 19:33:06.995299   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHKeyPath
	I0621 19:33:06.995454   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHUsername
	I0621 19:33:06.995595   67351 sshutil.go:53] new ssh client: &{IP:192.168.50.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/kubernetes-upgrade-371786/id_rsa Username:docker}
	I0621 19:33:07.007671   67351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32793
	I0621 19:33:07.008141   67351 main.go:141] libmachine: () Calling .GetVersion
	I0621 19:33:07.008785   67351 main.go:141] libmachine: Using API Version  1
	I0621 19:33:07.008807   67351 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 19:33:07.009150   67351 main.go:141] libmachine: () Calling .GetMachineName
	I0621 19:33:07.009538   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetState
	I0621 19:33:07.011503   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .DriverName
	I0621 19:33:07.012199   67351 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0621 19:33:07.012312   67351 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0621 19:33:07.012421   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHHostname
	I0621 19:33:07.016700   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:33:07.017230   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:60:26", ip: ""} in network mk-kubernetes-upgrade-371786: {Iface:virbr2 ExpiryTime:2024-06-21 20:32:16 +0000 UTC Type:0 Mac:52:54:00:00:60:26 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-371786 Clientid:01:52:54:00:00:60:26}
	I0621 19:33:07.017270   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | domain kubernetes-upgrade-371786 has defined IP address 192.168.50.198 and MAC address 52:54:00:00:60:26 in network mk-kubernetes-upgrade-371786
	I0621 19:33:07.017366   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHPort
	I0621 19:33:07.017819   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHKeyPath
	I0621 19:33:07.018099   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .GetSSHUsername
	I0621 19:33:07.018298   67351 sshutil.go:53] new ssh client: &{IP:192.168.50.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/kubernetes-upgrade-371786/id_rsa Username:docker}
	I0621 19:33:07.151386   67351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0621 19:33:07.171223   67351 api_server.go:52] waiting for apiserver process to appear ...
	I0621 19:33:07.171318   67351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0621 19:33:07.185229   67351 api_server.go:72] duration metric: took 242.775399ms to wait for apiserver process to appear ...
	I0621 19:33:07.185256   67351 api_server.go:88] waiting for apiserver healthz status ...
	I0621 19:33:07.185281   67351 api_server.go:253] Checking apiserver healthz at https://192.168.50.198:8443/healthz ...
	I0621 19:33:07.189862   67351 api_server.go:279] https://192.168.50.198:8443/healthz returned 200:
	ok
	I0621 19:33:07.190770   67351 api_server.go:141] control plane version: v1.30.2
	I0621 19:33:07.190786   67351 api_server.go:131] duration metric: took 5.523836ms to wait for apiserver health ...
	I0621 19:33:07.190793   67351 system_pods.go:43] waiting for kube-system pods to appear ...
	I0621 19:33:07.194661   67351 system_pods.go:59] 5 kube-system pods found
	I0621 19:33:07.194685   67351 system_pods.go:61] "etcd-kubernetes-upgrade-371786" [a0490a7d-6933-45a6-9d79-62e9f5527cc1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0621 19:33:07.194694   67351 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-371786" [8476bcf6-db70-4a56-aa4e-4ba8beb91d21] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0621 19:33:07.194706   67351 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-371786" [c420e17e-a964-47d0-a070-7ebbd63daacf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0621 19:33:07.194715   67351 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-371786" [04b468ac-294e-4098-a77d-fbab32c2f7d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0621 19:33:07.194724   67351 system_pods.go:61] "storage-provisioner" [936e5b50-70e8-4cca-af1b-1c5e5123e743] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0621 19:33:07.194733   67351 system_pods.go:74] duration metric: took 3.933127ms to wait for pod list to return data ...
	I0621 19:33:07.194749   67351 kubeadm.go:576] duration metric: took 252.299842ms to wait for: map[apiserver:true system_pods:true]
	I0621 19:33:07.194762   67351 node_conditions.go:102] verifying NodePressure condition ...
	I0621 19:33:07.196670   67351 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0621 19:33:07.196689   67351 node_conditions.go:123] node cpu capacity is 2
	I0621 19:33:07.196700   67351 node_conditions.go:105] duration metric: took 1.933003ms to run NodePressure ...
	I0621 19:33:07.196713   67351 start.go:240] waiting for startup goroutines ...
	I0621 19:33:07.241928   67351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0621 19:33:07.324207   67351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0621 19:33:07.407332   67351 main.go:141] libmachine: Making call to close driver server
	I0621 19:33:07.407364   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .Close
	I0621 19:33:07.409252   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | Closing plugin on server side
	I0621 19:33:07.409267   67351 main.go:141] libmachine: Successfully made call to close driver server
	I0621 19:33:07.409283   67351 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 19:33:07.409297   67351 main.go:141] libmachine: Making call to close driver server
	I0621 19:33:07.409307   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .Close
	I0621 19:33:07.409577   67351 main.go:141] libmachine: Successfully made call to close driver server
	I0621 19:33:07.409593   67351 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 19:33:07.415552   67351 main.go:141] libmachine: Making call to close driver server
	I0621 19:33:07.415570   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .Close
	I0621 19:33:07.415830   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | Closing plugin on server side
	I0621 19:33:07.415858   67351 main.go:141] libmachine: Successfully made call to close driver server
	I0621 19:33:07.415896   67351 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 19:33:08.036174   67351 main.go:141] libmachine: Making call to close driver server
	I0621 19:33:08.036197   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .Close
	I0621 19:33:08.036515   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | Closing plugin on server side
	I0621 19:33:08.036552   67351 main.go:141] libmachine: Successfully made call to close driver server
	I0621 19:33:08.036561   67351 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 19:33:08.036574   67351 main.go:141] libmachine: Making call to close driver server
	I0621 19:33:08.036586   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) Calling .Close
	I0621 19:33:08.036837   67351 main.go:141] libmachine: Successfully made call to close driver server
	I0621 19:33:08.036853   67351 main.go:141] libmachine: Making call to close connection to plugin binary
	I0621 19:33:08.036866   67351 main.go:141] libmachine: (kubernetes-upgrade-371786) DBG | Closing plugin on server side
	I0621 19:33:08.038863   67351 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0621 19:33:08.040110   67351 addons.go:510] duration metric: took 1.097595299s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0621 19:33:08.040150   67351 start.go:245] waiting for cluster config update ...
	I0621 19:33:08.040164   67351 start.go:254] writing updated cluster config ...
	I0621 19:33:08.040394   67351 ssh_runner.go:195] Run: rm -f paused
	I0621 19:33:08.095991   67351 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0621 19:33:08.097844   67351 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-371786" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.793474897Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718998388793453719,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=84ad9c72-df06-4c33-8bfe-939ba6ab2fa3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.793887691Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26de2bf5-e436-4d02-b7de-fad1162fc9b8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.793946950Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26de2bf5-e436-4d02-b7de-fad1162fc9b8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.794251131Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20a4a5d9203c4bd116c919c58fe80fa3dfef3f00468f70b4ac05d4540400e3e6,PodSandboxId:38dd92450d5132a83d5e255a77cb3c9ba42b7e9cdbaf3378926423e4c4800101,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718998381445955630,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bae1e4a78ef2c11969d5de1d3662a86,},Annotations:map[string]string{io.kubernetes.container.hash: 979832ee,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88309ad05c98967a73d41dc8e24b01e7e5ed2bc79884489c585861b33547c6dc,PodSandboxId:3fa5ca4058bacc5ab7edd4c1244b43242bd812a645221f0b295c53cfdc8a0310,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718998381425646484,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd6d724f2b75866fded65e5b50a3b283,},Annotations:map[string]string{io.kubernetes.container.hash: e5722f2d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47727988e6ef126d691ae66c70d03ca62fd4d1afd6f2ca49ae1ea0276339e495,PodSandboxId:63542b6342b92ba2f86a97c4497a9b9a54674c00446d1ed0cfa15bed8e5bbc47,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718998381463100621,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af704b8374343b64e5ab744c61b16604,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61b57e4e963b80f921182bb94f1a56ecc5088be0b5b420e8b6feaa29d7f27b26,PodSandboxId:d8ba49ce6fcd10da2e8ba59ccc8ae6c5308a0ce60c9fb5d18c5debb487870150,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718998381438932505,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d078dac7022e397d152afc30d2f842ec,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf199d681fb8b9602c09b3936112a0e1e0083f411802a63be19dda6d0206b835,PodSandboxId:11f982b4e5c548bca924492c8d285b95aaf2d91ffe613c138c4e18feac6b80e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1718998368184216999,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af704b8374343b64e5ab744c61b16604,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03e40c667979a7de11f91fa58f1bc90c88bac6183ce0cc5f7ac8b6a23b311b43,PodSandboxId:adb4ac3610faaa998e01e562bb11bd91e8e8915a0faf7d62f52728541f0076cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718998368148539961,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd6d724f2b75866fded65e5b50a3b283,},Annotations:map[string]string{io.kubernetes.container.hash: e5722f2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d765e8a4db7c879e2289a421f70b5406884218c2d0e318623dee0b0df6c3dc4b,PodSandboxId:81a350f2f9b6d0af8441ae2bc39e12ff0d33dc6cdd24e42b527c27b041fee6f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718998368105743449,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d078dac7022e397d152afc30d2f842ec,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a8b89bba763a1933ea79fae110045cf84dde67a3bd02bc7000a738c583d926b,PodSandboxId:c721a924ac3c073abc15062d1c0a46420295c5ed9899e999506893bb98c712c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1718998368071679371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bae1e4a78ef2c11969d5de1d3662a86,},Annotations:map[string]string{io.kubernetes.container.hash: 979832ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26de2bf5-e436-4d02-b7de-fad1162fc9b8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.842557573Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c8b37f6e-4325-477b-9f4c-a91679e7e950 name=/runtime.v1.RuntimeService/Version
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.842630890Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c8b37f6e-4325-477b-9f4c-a91679e7e950 name=/runtime.v1.RuntimeService/Version
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.844158829Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1af891ec-6ccc-44dd-abb7-30cf419aa577 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.844574360Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718998388844550477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1af891ec-6ccc-44dd-abb7-30cf419aa577 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.845226218Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c308721e-6c09-4f0c-a0fd-e631594133ac name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.845305926Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c308721e-6c09-4f0c-a0fd-e631594133ac name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.845474455Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20a4a5d9203c4bd116c919c58fe80fa3dfef3f00468f70b4ac05d4540400e3e6,PodSandboxId:38dd92450d5132a83d5e255a77cb3c9ba42b7e9cdbaf3378926423e4c4800101,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718998381445955630,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bae1e4a78ef2c11969d5de1d3662a86,},Annotations:map[string]string{io.kubernetes.container.hash: 979832ee,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88309ad05c98967a73d41dc8e24b01e7e5ed2bc79884489c585861b33547c6dc,PodSandboxId:3fa5ca4058bacc5ab7edd4c1244b43242bd812a645221f0b295c53cfdc8a0310,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718998381425646484,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd6d724f2b75866fded65e5b50a3b283,},Annotations:map[string]string{io.kubernetes.container.hash: e5722f2d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47727988e6ef126d691ae66c70d03ca62fd4d1afd6f2ca49ae1ea0276339e495,PodSandboxId:63542b6342b92ba2f86a97c4497a9b9a54674c00446d1ed0cfa15bed8e5bbc47,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718998381463100621,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af704b8374343b64e5ab744c61b16604,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61b57e4e963b80f921182bb94f1a56ecc5088be0b5b420e8b6feaa29d7f27b26,PodSandboxId:d8ba49ce6fcd10da2e8ba59ccc8ae6c5308a0ce60c9fb5d18c5debb487870150,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718998381438932505,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d078dac7022e397d152afc30d2f842ec,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf199d681fb8b9602c09b3936112a0e1e0083f411802a63be19dda6d0206b835,PodSandboxId:11f982b4e5c548bca924492c8d285b95aaf2d91ffe613c138c4e18feac6b80e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1718998368184216999,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af704b8374343b64e5ab744c61b16604,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03e40c667979a7de11f91fa58f1bc90c88bac6183ce0cc5f7ac8b6a23b311b43,PodSandboxId:adb4ac3610faaa998e01e562bb11bd91e8e8915a0faf7d62f52728541f0076cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718998368148539961,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd6d724f2b75866fded65e5b50a3b283,},Annotations:map[string]string{io.kubernetes.container.hash: e5722f2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d765e8a4db7c879e2289a421f70b5406884218c2d0e318623dee0b0df6c3dc4b,PodSandboxId:81a350f2f9b6d0af8441ae2bc39e12ff0d33dc6cdd24e42b527c27b041fee6f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718998368105743449,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d078dac7022e397d152afc30d2f842ec,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a8b89bba763a1933ea79fae110045cf84dde67a3bd02bc7000a738c583d926b,PodSandboxId:c721a924ac3c073abc15062d1c0a46420295c5ed9899e999506893bb98c712c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1718998368071679371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bae1e4a78ef2c11969d5de1d3662a86,},Annotations:map[string]string{io.kubernetes.container.hash: 979832ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c308721e-6c09-4f0c-a0fd-e631594133ac name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.897002096Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8f86336e-7e44-42b0-b5f9-9fe7fe090b80 name=/runtime.v1.RuntimeService/Version
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.897208872Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f86336e-7e44-42b0-b5f9-9fe7fe090b80 name=/runtime.v1.RuntimeService/Version
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.898477344Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6bce93b0-59ee-4aa4-9c81-610d86dad13f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.899088320Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718998388899005090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6bce93b0-59ee-4aa4-9c81-610d86dad13f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.900023128Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4193b038-cfa2-4625-8225-6c8bbcd30a3e name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.900244592Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4193b038-cfa2-4625-8225-6c8bbcd30a3e name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.900710261Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20a4a5d9203c4bd116c919c58fe80fa3dfef3f00468f70b4ac05d4540400e3e6,PodSandboxId:38dd92450d5132a83d5e255a77cb3c9ba42b7e9cdbaf3378926423e4c4800101,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718998381445955630,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bae1e4a78ef2c11969d5de1d3662a86,},Annotations:map[string]string{io.kubernetes.container.hash: 979832ee,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88309ad05c98967a73d41dc8e24b01e7e5ed2bc79884489c585861b33547c6dc,PodSandboxId:3fa5ca4058bacc5ab7edd4c1244b43242bd812a645221f0b295c53cfdc8a0310,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718998381425646484,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd6d724f2b75866fded65e5b50a3b283,},Annotations:map[string]string{io.kubernetes.container.hash: e5722f2d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47727988e6ef126d691ae66c70d03ca62fd4d1afd6f2ca49ae1ea0276339e495,PodSandboxId:63542b6342b92ba2f86a97c4497a9b9a54674c00446d1ed0cfa15bed8e5bbc47,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718998381463100621,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af704b8374343b64e5ab744c61b16604,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61b57e4e963b80f921182bb94f1a56ecc5088be0b5b420e8b6feaa29d7f27b26,PodSandboxId:d8ba49ce6fcd10da2e8ba59ccc8ae6c5308a0ce60c9fb5d18c5debb487870150,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718998381438932505,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d078dac7022e397d152afc30d2f842ec,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf199d681fb8b9602c09b3936112a0e1e0083f411802a63be19dda6d0206b835,PodSandboxId:11f982b4e5c548bca924492c8d285b95aaf2d91ffe613c138c4e18feac6b80e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1718998368184216999,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af704b8374343b64e5ab744c61b16604,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03e40c667979a7de11f91fa58f1bc90c88bac6183ce0cc5f7ac8b6a23b311b43,PodSandboxId:adb4ac3610faaa998e01e562bb11bd91e8e8915a0faf7d62f52728541f0076cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718998368148539961,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd6d724f2b75866fded65e5b50a3b283,},Annotations:map[string]string{io.kubernetes.container.hash: e5722f2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d765e8a4db7c879e2289a421f70b5406884218c2d0e318623dee0b0df6c3dc4b,PodSandboxId:81a350f2f9b6d0af8441ae2bc39e12ff0d33dc6cdd24e42b527c27b041fee6f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718998368105743449,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d078dac7022e397d152afc30d2f842ec,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a8b89bba763a1933ea79fae110045cf84dde67a3bd02bc7000a738c583d926b,PodSandboxId:c721a924ac3c073abc15062d1c0a46420295c5ed9899e999506893bb98c712c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1718998368071679371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bae1e4a78ef2c11969d5de1d3662a86,},Annotations:map[string]string{io.kubernetes.container.hash: 979832ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4193b038-cfa2-4625-8225-6c8bbcd30a3e name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.943846806Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c483093e-10a7-471b-9617-aab11c6091c8 name=/runtime.v1.RuntimeService/Version
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.943929103Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c483093e-10a7-471b-9617-aab11c6091c8 name=/runtime.v1.RuntimeService/Version
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.945345582Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bc26c37f-853e-4adc-9d6f-54cd44e348fc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.945719405Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718998388945697640,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc26c37f-853e-4adc-9d6f-54cd44e348fc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.946400066Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1d94d35-f828-46f6-983c-b833048743d9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.946455359Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1d94d35-f828-46f6-983c-b833048743d9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:33:08 kubernetes-upgrade-371786 crio[1882]: time="2024-06-21 19:33:08.946641149Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20a4a5d9203c4bd116c919c58fe80fa3dfef3f00468f70b4ac05d4540400e3e6,PodSandboxId:38dd92450d5132a83d5e255a77cb3c9ba42b7e9cdbaf3378926423e4c4800101,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718998381445955630,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bae1e4a78ef2c11969d5de1d3662a86,},Annotations:map[string]string{io.kubernetes.container.hash: 979832ee,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88309ad05c98967a73d41dc8e24b01e7e5ed2bc79884489c585861b33547c6dc,PodSandboxId:3fa5ca4058bacc5ab7edd4c1244b43242bd812a645221f0b295c53cfdc8a0310,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718998381425646484,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd6d724f2b75866fded65e5b50a3b283,},Annotations:map[string]string{io.kubernetes.container.hash: e5722f2d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47727988e6ef126d691ae66c70d03ca62fd4d1afd6f2ca49ae1ea0276339e495,PodSandboxId:63542b6342b92ba2f86a97c4497a9b9a54674c00446d1ed0cfa15bed8e5bbc47,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718998381463100621,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af704b8374343b64e5ab744c61b16604,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61b57e4e963b80f921182bb94f1a56ecc5088be0b5b420e8b6feaa29d7f27b26,PodSandboxId:d8ba49ce6fcd10da2e8ba59ccc8ae6c5308a0ce60c9fb5d18c5debb487870150,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718998381438932505,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d078dac7022e397d152afc30d2f842ec,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf199d681fb8b9602c09b3936112a0e1e0083f411802a63be19dda6d0206b835,PodSandboxId:11f982b4e5c548bca924492c8d285b95aaf2d91ffe613c138c4e18feac6b80e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1718998368184216999,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af704b8374343b64e5ab744c61b16604,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03e40c667979a7de11f91fa58f1bc90c88bac6183ce0cc5f7ac8b6a23b311b43,PodSandboxId:adb4ac3610faaa998e01e562bb11bd91e8e8915a0faf7d62f52728541f0076cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718998368148539961,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd6d724f2b75866fded65e5b50a3b283,},Annotations:map[string]string{io.kubernetes.container.hash: e5722f2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d765e8a4db7c879e2289a421f70b5406884218c2d0e318623dee0b0df6c3dc4b,PodSandboxId:81a350f2f9b6d0af8441ae2bc39e12ff0d33dc6cdd24e42b527c27b041fee6f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718998368105743449,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d078dac7022e397d152afc30d2f842ec,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a8b89bba763a1933ea79fae110045cf84dde67a3bd02bc7000a738c583d926b,PodSandboxId:c721a924ac3c073abc15062d1c0a46420295c5ed9899e999506893bb98c712c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1718998368071679371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-371786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bae1e4a78ef2c11969d5de1d3662a86,},Annotations:map[string]string{io.kubernetes.container.hash: 979832ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1d94d35-f828-46f6-983c-b833048743d9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	47727988e6ef1       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   7 seconds ago       Running             kube-controller-manager   2                   63542b6342b92       kube-controller-manager-kubernetes-upgrade-371786
	20a4a5d9203c4       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   7 seconds ago       Running             kube-apiserver            2                   38dd92450d513       kube-apiserver-kubernetes-upgrade-371786
	61b57e4e963b8       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   7 seconds ago       Running             kube-scheduler            2                   d8ba49ce6fcd1       kube-scheduler-kubernetes-upgrade-371786
	88309ad05c989       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   7 seconds ago       Running             etcd                      2                   3fa5ca4058bac       etcd-kubernetes-upgrade-371786
	bf199d681fb8b       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   20 seconds ago      Exited              kube-controller-manager   1                   11f982b4e5c54       kube-controller-manager-kubernetes-upgrade-371786
	03e40c667979a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   20 seconds ago      Exited              etcd                      1                   adb4ac3610faa       etcd-kubernetes-upgrade-371786
	d765e8a4db7c8       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   20 seconds ago      Exited              kube-scheduler            1                   81a350f2f9b6d       kube-scheduler-kubernetes-upgrade-371786
	2a8b89bba763a       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   20 seconds ago      Exited              kube-apiserver            1                   c721a924ac3c0       kube-apiserver-kubernetes-upgrade-371786
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-371786
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-371786
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 19:32:36 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-371786
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 19:33:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 19:33:05 +0000   Fri, 21 Jun 2024 19:32:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 19:33:05 +0000   Fri, 21 Jun 2024 19:32:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 19:33:05 +0000   Fri, 21 Jun 2024 19:32:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 19:33:05 +0000   Fri, 21 Jun 2024 19:32:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.198
	  Hostname:    kubernetes-upgrade-371786
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c27cf8bcd3d04da287dc0b05d99dd76d
	  System UUID:                c27cf8bc-d3d0-4da2-87dc-0b05d99dd76d
	  Boot ID:                    31d282c6-0aa3-411d-b716-d2ccaa503807
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-371786                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         31s
	  kube-system                 kube-apiserver-kubernetes-upgrade-371786             250m (12%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-371786    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-kubernetes-upgrade-371786             100m (5%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (4%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 38s                kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  37s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  36s (x8 over 38s)  kubelet  Node kubernetes-upgrade-371786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 38s)  kubelet  Node kubernetes-upgrade-371786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x7 over 38s)  kubelet  Node kubernetes-upgrade-371786 status is now: NodeHasSufficientPID
	  Normal  Starting                 9s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet  Node kubernetes-upgrade-371786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet  Node kubernetes-upgrade-371786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet  Node kubernetes-upgrade-371786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet  Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +1.624200] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.967633] systemd-fstab-generator[562]: Ignoring "noauto" option for root device
	[  +0.069737] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070985] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +0.212231] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.130679] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.283819] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +4.265572] systemd-fstab-generator[724]: Ignoring "noauto" option for root device
	[  +0.069276] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.794690] systemd-fstab-generator[848]: Ignoring "noauto" option for root device
	[  +7.754401] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	[  +0.109776] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.600303] kauditd_printk_skb: 18 callbacks suppressed
	[  +2.112776] systemd-fstab-generator[1795]: Ignoring "noauto" option for root device
	[  +0.227976] systemd-fstab-generator[1809]: Ignoring "noauto" option for root device
	[  +0.235147] systemd-fstab-generator[1825]: Ignoring "noauto" option for root device
	[  +0.215971] systemd-fstab-generator[1837]: Ignoring "noauto" option for root device
	[  +0.382834] systemd-fstab-generator[1865]: Ignoring "noauto" option for root device
	[  +8.033786] kauditd_printk_skb: 143 callbacks suppressed
	[  +0.083791] systemd-fstab-generator[2082]: Ignoring "noauto" option for root device
	[Jun21 19:33] systemd-fstab-generator[2322]: Ignoring "noauto" option for root device
	[  +6.321562] systemd-fstab-generator[2584]: Ignoring "noauto" option for root device
	[  +0.098314] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [03e40c667979a7de11f91fa58f1bc90c88bac6183ce0cc5f7ac8b6a23b311b43] <==
	{"level":"info","ts":"2024-06-21T19:32:50.06126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 660bac99608ea9d0 elected leader 660bac99608ea9d0 at term 3"}
	{"level":"info","ts":"2024-06-21T19:32:50.070384Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"660bac99608ea9d0","local-member-attributes":"{Name:kubernetes-upgrade-371786 ClientURLs:[https://192.168.50.198:2379]}","request-path":"/0/members/660bac99608ea9d0/attributes","cluster-id":"733f14de9f5dcd1c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-21T19:32:50.071117Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T19:32:50.071565Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T19:32:50.081217Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-21T19:32:50.087421Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.198:2379"}
	{"level":"info","ts":"2024-06-21T19:32:50.090724Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-21T19:32:50.098126Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-21T19:32:50.169833Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-06-21T19:32:50.171894Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"kubernetes-upgrade-371786","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.198:2380"],"advertise-client-urls":["https://192.168.50.198:2379"]}
	{"level":"warn","ts":"2024-06-21T19:32:50.173238Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-21T19:32:50.175267Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-21T19:32:50.175888Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:46418","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:46418: use of closed network connection"}
	{"level":"warn","ts":"2024-06-21T19:32:50.178239Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:46438","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:46438: use of closed network connection"}
	{"level":"warn","ts":"2024-06-21T19:32:50.178263Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:46402","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:46402: use of closed network connection"}
	2024/06/21 19:32:50 WARNING: [core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: failed to write client preface: write tcp 127.0.0.1:46438->127.0.0.1:2379: write: broken pipe"
	2024/06/21 19:32:51 WARNING: [core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	2024/06/21 19:32:52 WARNING: [core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	2024/06/21 19:32:55 WARNING: [core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	{"level":"warn","ts":"2024-06-21T19:32:57.174201Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.198:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-21T19:32:57.17426Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.198:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-21T19:32:57.174364Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"660bac99608ea9d0","current-leader-member-id":"660bac99608ea9d0"}
	{"level":"info","ts":"2024-06-21T19:32:57.179771Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.198:2380"}
	{"level":"info","ts":"2024-06-21T19:32:57.179918Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.198:2380"}
	{"level":"info","ts":"2024-06-21T19:32:57.179961Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"kubernetes-upgrade-371786","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.198:2380"],"advertise-client-urls":["https://192.168.50.198:2379"]}
	
	
	==> etcd [88309ad05c98967a73d41dc8e24b01e7e5ed2bc79884489c585861b33547c6dc] <==
	{"level":"info","ts":"2024-06-21T19:33:01.938359Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-21T19:33:01.938389Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-21T19:33:01.938657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"660bac99608ea9d0 switched to configuration voters=(7353160591362402768)"}
	{"level":"info","ts":"2024-06-21T19:33:01.938766Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"733f14de9f5dcd1c","local-member-id":"660bac99608ea9d0","added-peer-id":"660bac99608ea9d0","added-peer-peer-urls":["https://192.168.50.198:2380"]}
	{"level":"info","ts":"2024-06-21T19:33:01.938922Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"733f14de9f5dcd1c","local-member-id":"660bac99608ea9d0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T19:33:01.93898Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T19:33:01.956303Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-21T19:33:01.956517Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.198:2380"}
	{"level":"info","ts":"2024-06-21T19:33:01.956582Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.198:2380"}
	{"level":"info","ts":"2024-06-21T19:33:01.956584Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"660bac99608ea9d0","initial-advertise-peer-urls":["https://192.168.50.198:2380"],"listen-peer-urls":["https://192.168.50.198:2380"],"advertise-client-urls":["https://192.168.50.198:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.198:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-21T19:33:01.961165Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-21T19:33:03.15011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"660bac99608ea9d0 is starting a new election at term 3"}
	{"level":"info","ts":"2024-06-21T19:33:03.150219Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"660bac99608ea9d0 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-06-21T19:33:03.150276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"660bac99608ea9d0 received MsgPreVoteResp from 660bac99608ea9d0 at term 3"}
	{"level":"info","ts":"2024-06-21T19:33:03.150312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"660bac99608ea9d0 became candidate at term 4"}
	{"level":"info","ts":"2024-06-21T19:33:03.150343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"660bac99608ea9d0 received MsgVoteResp from 660bac99608ea9d0 at term 4"}
	{"level":"info","ts":"2024-06-21T19:33:03.150371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"660bac99608ea9d0 became leader at term 4"}
	{"level":"info","ts":"2024-06-21T19:33:03.150402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 660bac99608ea9d0 elected leader 660bac99608ea9d0 at term 4"}
	{"level":"info","ts":"2024-06-21T19:33:03.155666Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"660bac99608ea9d0","local-member-attributes":"{Name:kubernetes-upgrade-371786 ClientURLs:[https://192.168.50.198:2379]}","request-path":"/0/members/660bac99608ea9d0/attributes","cluster-id":"733f14de9f5dcd1c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-21T19:33:03.155936Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T19:33:03.158704Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T19:33:03.159077Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-21T19:33:03.159134Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-21T19:33:03.160716Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.198:2379"}
	{"level":"info","ts":"2024-06-21T19:33:03.161025Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:33:09 up 1 min,  0 users,  load average: 1.82, 0.51, 0.17
	Linux kubernetes-upgrade-371786 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [20a4a5d9203c4bd116c919c58fe80fa3dfef3f00468f70b4ac05d4540400e3e6] <==
	I0621 19:33:04.818208       1 controller.go:116] Starting legacy_token_tracking_controller
	I0621 19:33:04.819855       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0621 19:33:04.938354       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0621 19:33:04.940115       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0621 19:33:04.940188       1 policy_source.go:224] refreshing policies
	I0621 19:33:04.942912       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0621 19:33:05.017480       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0621 19:33:05.017591       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0621 19:33:05.018984       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0621 19:33:05.020813       1 shared_informer.go:320] Caches are synced for configmaps
	I0621 19:33:05.023419       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0621 19:33:05.025137       1 aggregator.go:165] initial CRD sync complete...
	I0621 19:33:05.025155       1 autoregister_controller.go:141] Starting autoregister controller
	I0621 19:33:05.025216       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0621 19:33:05.025238       1 cache.go:39] Caches are synced for autoregister controller
	I0621 19:33:05.033902       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0621 19:33:05.037667       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	E0621 19:33:05.051918       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0621 19:33:05.057343       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0621 19:33:05.823171       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0621 19:33:06.712820       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0621 19:33:06.735271       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0621 19:33:06.778605       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0621 19:33:06.871932       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0621 19:33:06.886485       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [2a8b89bba763a1933ea79fae110045cf84dde67a3bd02bc7000a738c583d926b] <==
	I0621 19:32:48.513235       1 options.go:221] external host was not specified, using 192.168.50.198
	I0621 19:32:48.514460       1 server.go:148] Version: v1.30.2
	I0621 19:32:48.514506       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 19:32:50.045444       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0621 19:32:50.068198       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0621 19:32:50.073352       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0621 19:32:50.073516       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0621 19:32:50.073767       1 instance.go:299] Using reconciler: lease
	
	
	==> kube-controller-manager [47727988e6ef126d691ae66c70d03ca62fd4d1afd6f2ca49ae1ea0276339e495] <==
	I0621 19:33:08.274159       1 shared_informer.go:313] Waiting for caches to sync for job
	I0621 19:33:08.422471       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0621 19:33:08.422554       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0621 19:33:08.422564       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0621 19:33:08.573477       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0621 19:33:08.573631       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0621 19:33:08.573646       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0621 19:33:08.725612       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0621 19:33:08.725769       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0621 19:33:08.725837       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0621 19:33:08.872196       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0621 19:33:08.872277       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0621 19:33:08.872287       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0621 19:33:09.022852       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0621 19:33:09.022930       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0621 19:33:09.022939       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0621 19:33:09.175010       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0621 19:33:09.175281       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0621 19:33:09.175421       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0621 19:33:09.323126       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0621 19:33:09.323266       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0621 19:33:09.323306       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0621 19:33:09.323318       1 shared_informer.go:320] Caches are synced for token_cleaner
	E0621 19:33:09.369670       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0621 19:33:09.369692       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	
	
	==> kube-controller-manager [bf199d681fb8b9602c09b3936112a0e1e0083f411802a63be19dda6d0206b835] <==
	
	
	==> kube-scheduler [61b57e4e963b80f921182bb94f1a56ecc5088be0b5b420e8b6feaa29d7f27b26] <==
	I0621 19:33:02.876336       1 serving.go:380] Generated self-signed cert in-memory
	W0621 19:33:04.897451       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0621 19:33:04.897614       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0621 19:33:04.897712       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0621 19:33:04.897745       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0621 19:33:04.957632       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0621 19:33:04.957665       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 19:33:04.962951       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0621 19:33:04.963592       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0621 19:33:04.964205       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0621 19:33:04.963622       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0621 19:33:05.065337       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d765e8a4db7c879e2289a421f70b5406884218c2d0e318623dee0b0df6c3dc4b] <==
	
	
	==> kubelet <==
	Jun 21 19:33:01 kubernetes-upgrade-371786 kubelet[2329]: I0621 19:33:01.176943    2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af704b8374343b64e5ab744c61b16604-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-371786\" (UID: \"af704b8374343b64e5ab744c61b16604\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-371786"
	Jun 21 19:33:01 kubernetes-upgrade-371786 kubelet[2329]: I0621 19:33:01.176961    2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/af704b8374343b64e5ab744c61b16604-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-371786\" (UID: \"af704b8374343b64e5ab744c61b16604\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-371786"
	Jun 21 19:33:01 kubernetes-upgrade-371786 kubelet[2329]: I0621 19:33:01.176976    2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af704b8374343b64e5ab744c61b16604-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-371786\" (UID: \"af704b8374343b64e5ab744c61b16604\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-371786"
	Jun 21 19:33:01 kubernetes-upgrade-371786 kubelet[2329]: I0621 19:33:01.177016    2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d078dac7022e397d152afc30d2f842ec-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-371786\" (UID: \"d078dac7022e397d152afc30d2f842ec\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-371786"
	Jun 21 19:33:01 kubernetes-upgrade-371786 kubelet[2329]: I0621 19:33:01.177032    2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7bae1e4a78ef2c11969d5de1d3662a86-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-371786\" (UID: \"7bae1e4a78ef2c11969d5de1d3662a86\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-371786"
	Jun 21 19:33:01 kubernetes-upgrade-371786 kubelet[2329]: I0621 19:33:01.177098    2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7bae1e4a78ef2c11969d5de1d3662a86-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-371786\" (UID: \"7bae1e4a78ef2c11969d5de1d3662a86\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-371786"
	Jun 21 19:33:01 kubernetes-upgrade-371786 kubelet[2329]: I0621 19:33:01.177115    2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7bae1e4a78ef2c11969d5de1d3662a86-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-371786\" (UID: \"7bae1e4a78ef2c11969d5de1d3662a86\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-371786"
	Jun 21 19:33:01 kubernetes-upgrade-371786 kubelet[2329]: I0621 19:33:01.177185    2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af704b8374343b64e5ab744c61b16604-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-371786\" (UID: \"af704b8374343b64e5ab744c61b16604\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-371786"
	Jun 21 19:33:01 kubernetes-upgrade-371786 kubelet[2329]: I0621 19:33:01.177215    2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/af704b8374343b64e5ab744c61b16604-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-371786\" (UID: \"af704b8374343b64e5ab744c61b16604\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-371786"
	Jun 21 19:33:01 kubernetes-upgrade-371786 kubelet[2329]: I0621 19:33:01.274556    2329 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-371786"
	Jun 21 19:33:01 kubernetes-upgrade-371786 kubelet[2329]: E0621 19:33:01.275673    2329 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.198:8443: connect: connection refused" node="kubernetes-upgrade-371786"
	Jun 21 19:33:01 kubernetes-upgrade-371786 kubelet[2329]: I0621 19:33:01.399631    2329 scope.go:117] "RemoveContainer" containerID="03e40c667979a7de11f91fa58f1bc90c88bac6183ce0cc5f7ac8b6a23b311b43"
	Jun 21 19:33:01 kubernetes-upgrade-371786 kubelet[2329]: I0621 19:33:01.401309    2329 scope.go:117] "RemoveContainer" containerID="2a8b89bba763a1933ea79fae110045cf84dde67a3bd02bc7000a738c583d926b"
	Jun 21 19:33:01 kubernetes-upgrade-371786 kubelet[2329]: I0621 19:33:01.402807    2329 scope.go:117] "RemoveContainer" containerID="bf199d681fb8b9602c09b3936112a0e1e0083f411802a63be19dda6d0206b835"
	Jun 21 19:33:01 kubernetes-upgrade-371786 kubelet[2329]: I0621 19:33:01.403566    2329 scope.go:117] "RemoveContainer" containerID="d765e8a4db7c879e2289a421f70b5406884218c2d0e318623dee0b0df6c3dc4b"
	Jun 21 19:33:01 kubernetes-upgrade-371786 kubelet[2329]: E0621 19:33:01.574135    2329 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-371786?timeout=10s\": dial tcp 192.168.50.198:8443: connect: connection refused" interval="800ms"
	Jun 21 19:33:01 kubernetes-upgrade-371786 kubelet[2329]: I0621 19:33:01.677470    2329 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-371786"
	Jun 21 19:33:01 kubernetes-upgrade-371786 kubelet[2329]: E0621 19:33:01.678352    2329 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.198:8443: connect: connection refused" node="kubernetes-upgrade-371786"
	Jun 21 19:33:01 kubernetes-upgrade-371786 kubelet[2329]: W0621 19:33:01.863231    2329 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.198:8443: connect: connection refused
	Jun 21 19:33:01 kubernetes-upgrade-371786 kubelet[2329]: E0621 19:33:01.863314    2329 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.198:8443: connect: connection refused
	Jun 21 19:33:02 kubernetes-upgrade-371786 kubelet[2329]: I0621 19:33:02.481742    2329 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-371786"
	Jun 21 19:33:04 kubernetes-upgrade-371786 kubelet[2329]: I0621 19:33:04.933391    2329 apiserver.go:52] "Watching apiserver"
	Jun 21 19:33:04 kubernetes-upgrade-371786 kubelet[2329]: I0621 19:33:04.975595    2329 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 21 19:33:04 kubernetes-upgrade-371786 kubelet[2329]: I0621 19:33:04.997525    2329 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-371786"
	Jun 21 19:33:04 kubernetes-upgrade-371786 kubelet[2329]: I0621 19:33:04.997839    2329 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-371786"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-371786 -n kubernetes-upgrade-371786
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-371786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-371786 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-371786 describe pod storage-provisioner: exit status 1 (67.004511ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-371786 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-371786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-371786
--- FAIL: TestKubernetesUpgrade (382.45s)
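For readers who want to repeat the post-mortem steps above by hand, the sketch below wires together the same two kubectl invocations the helpers run: a field selector of status.phase!=Running to list non-running pods across all namespaces, then kubectl describe on each name. This is a minimal, hypothetical reproduction written for this report, not the helpers_test.go implementation; the kubernetes-upgrade-371786 context is taken from this run and only resolves while the profile still exists (it is deleted in the cleanup step above), so substitute any live context when trying it.

	// postmortem.go: hypothetical sketch mirroring the post-mortem commands logged above;
	// it is not the helpers_test.go code, just the same two kubectl calls chained together.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Context name taken from this run; the profile is deleted right after the
		// post-mortem, so point this at any context that still exists.
		context := "kubernetes-upgrade-371786"
	
		// Step 1: list the names of all pods whose phase is not Running (all namespaces),
		// using the same flags as the helper log line above.
		out, err := exec.Command("kubectl", "--context", context, "get", "po",
			"-o=jsonpath={.items[*].metadata.name}", "-A",
			"--field-selector=status.phase!=Running").CombinedOutput()
		if err != nil {
			fmt.Printf("listing non-running pods failed: %v\n%s", err, out)
			return
		}
		pods := strings.Fields(string(out))
		if len(pods) == 0 {
			fmt.Println("no non-running pods")
			return
		}
		fmt.Println("non-running pods:", strings.Join(pods, " "))
	
		// Step 2: describe each of them. The jsonpath above drops the namespace, so the
		// describe can come back NotFound (as it did for storage-provisioner in this run)
		// when the pod is not in the default namespace or has already been removed.
		for _, pod := range pods {
			desc, err := exec.Command("kubectl", "--context", context, "describe", "pod", pod).CombinedOutput()
			fmt.Printf("==> describe pod %s <==\n%s", pod, desc)
			if err != nil {
				fmt.Printf("(describe %s exited with error: %v)\n", pod, err)
			}
		}
	}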

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (49.34s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-709611 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-709611 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.704207385s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-709611] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-709611" primary control-plane node in "pause-709611" cluster
	* Updating the running kvm2 "pause-709611" VM ...
	* Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-709611" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0621 19:28:28.790116   59947 out.go:291] Setting OutFile to fd 1 ...
	I0621 19:28:28.790382   59947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 19:28:28.790391   59947 out.go:304] Setting ErrFile to fd 2...
	I0621 19:28:28.790396   59947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 19:28:28.790604   59947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 19:28:28.791220   59947 out.go:298] Setting JSON to false
	I0621 19:28:28.792187   59947 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7807,"bootTime":1718990302,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 19:28:28.792250   59947 start.go:139] virtualization: kvm guest
	I0621 19:28:28.794019   59947 out.go:177] * [pause-709611] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 19:28:28.795360   59947 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 19:28:28.795384   59947 notify.go:220] Checking for updates...
	I0621 19:28:28.797762   59947 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 19:28:28.799299   59947 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 19:28:28.800564   59947 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 19:28:28.801772   59947 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 19:28:28.802876   59947 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 19:28:28.804466   59947 config.go:182] Loaded profile config "pause-709611": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 19:28:28.804943   59947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 19:28:28.805005   59947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 19:28:28.821298   59947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37323
	I0621 19:28:28.821753   59947 main.go:141] libmachine: () Calling .GetVersion
	I0621 19:28:28.822420   59947 main.go:141] libmachine: Using API Version  1
	I0621 19:28:28.822445   59947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 19:28:28.822801   59947 main.go:141] libmachine: () Calling .GetMachineName
	I0621 19:28:28.823023   59947 main.go:141] libmachine: (pause-709611) Calling .DriverName
	I0621 19:28:28.823267   59947 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 19:28:28.823564   59947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 19:28:28.823608   59947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 19:28:28.839362   59947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40155
	I0621 19:28:28.839807   59947 main.go:141] libmachine: () Calling .GetVersion
	I0621 19:28:28.840274   59947 main.go:141] libmachine: Using API Version  1
	I0621 19:28:28.840295   59947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 19:28:28.840760   59947 main.go:141] libmachine: () Calling .GetMachineName
	I0621 19:28:28.841004   59947 main.go:141] libmachine: (pause-709611) Calling .DriverName
	I0621 19:28:28.878097   59947 out.go:177] * Using the kvm2 driver based on existing profile
	I0621 19:28:28.879485   59947 start.go:297] selected driver: kvm2
	I0621 19:28:28.879514   59947 start.go:901] validating driver "kvm2" against &{Name:pause-709611 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:pause-709611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.75 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 19:28:28.879673   59947 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 19:28:28.880145   59947 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 19:28:28.880246   59947 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 19:28:28.895272   59947 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 19:28:28.896002   59947 cni.go:84] Creating CNI manager for ""
	I0621 19:28:28.896025   59947 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0621 19:28:28.896100   59947 start.go:340] cluster config:
	{Name:pause-709611 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:pause-709611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.75 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 19:28:28.896285   59947 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 19:28:28.899213   59947 out.go:177] * Starting "pause-709611" primary control-plane node in "pause-709611" cluster
	I0621 19:28:28.900571   59947 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 19:28:28.900605   59947 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 19:28:28.900613   59947 cache.go:56] Caching tarball of preloaded images
	I0621 19:28:28.900694   59947 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 19:28:28.900708   59947 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 19:28:28.900839   59947 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/pause-709611/config.json ...
	I0621 19:28:28.901062   59947 start.go:360] acquireMachinesLock for pause-709611: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 19:28:28.901113   59947 start.go:364] duration metric: took 30.548µs to acquireMachinesLock for "pause-709611"
	I0621 19:28:28.901133   59947 start.go:96] Skipping create...Using existing machine configuration
	I0621 19:28:28.901139   59947 fix.go:54] fixHost starting: 
	I0621 19:28:28.901391   59947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 19:28:28.901424   59947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 19:28:28.916913   59947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44551
	I0621 19:28:28.917341   59947 main.go:141] libmachine: () Calling .GetVersion
	I0621 19:28:28.917855   59947 main.go:141] libmachine: Using API Version  1
	I0621 19:28:28.917877   59947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 19:28:28.918197   59947 main.go:141] libmachine: () Calling .GetMachineName
	I0621 19:28:28.918416   59947 main.go:141] libmachine: (pause-709611) Calling .DriverName
	I0621 19:28:28.918594   59947 main.go:141] libmachine: (pause-709611) Calling .GetState
	I0621 19:28:28.920333   59947 fix.go:112] recreateIfNeeded on pause-709611: state=Running err=<nil>
	W0621 19:28:28.920360   59947 fix.go:138] unexpected machine state, will restart: <nil>
	I0621 19:28:28.922365   59947 out.go:177] * Updating the running kvm2 "pause-709611" VM ...
	I0621 19:28:28.923774   59947 machine.go:94] provisionDockerMachine start ...
	I0621 19:28:28.923799   59947 main.go:141] libmachine: (pause-709611) Calling .DriverName
	I0621 19:28:28.923997   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHHostname
	I0621 19:28:28.926882   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:28.927366   59947 main.go:141] libmachine: (pause-709611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:fd:31", ip: ""} in network mk-pause-709611: {Iface:virbr1 ExpiryTime:2024-06-21 20:27:09 +0000 UTC Type:0 Mac:52:54:00:67:fd:31 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:pause-709611 Clientid:01:52:54:00:67:fd:31}
	I0621 19:28:28.927398   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined IP address 192.168.39.75 and MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:28.927562   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHPort
	I0621 19:28:28.927706   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHKeyPath
	I0621 19:28:28.927834   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHKeyPath
	I0621 19:28:28.927944   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHUsername
	I0621 19:28:28.928073   59947 main.go:141] libmachine: Using SSH client type: native
	I0621 19:28:28.928266   59947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I0621 19:28:28.928276   59947 main.go:141] libmachine: About to run SSH command:
	hostname
	I0621 19:28:29.038474   59947 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-709611
	
	I0621 19:28:29.038502   59947 main.go:141] libmachine: (pause-709611) Calling .GetMachineName
	I0621 19:28:29.038766   59947 buildroot.go:166] provisioning hostname "pause-709611"
	I0621 19:28:29.038797   59947 main.go:141] libmachine: (pause-709611) Calling .GetMachineName
	I0621 19:28:29.038995   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHHostname
	I0621 19:28:29.042183   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:29.042638   59947 main.go:141] libmachine: (pause-709611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:fd:31", ip: ""} in network mk-pause-709611: {Iface:virbr1 ExpiryTime:2024-06-21 20:27:09 +0000 UTC Type:0 Mac:52:54:00:67:fd:31 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:pause-709611 Clientid:01:52:54:00:67:fd:31}
	I0621 19:28:29.042688   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined IP address 192.168.39.75 and MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:29.042867   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHPort
	I0621 19:28:29.043061   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHKeyPath
	I0621 19:28:29.043249   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHKeyPath
	I0621 19:28:29.043438   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHUsername
	I0621 19:28:29.043647   59947 main.go:141] libmachine: Using SSH client type: native
	I0621 19:28:29.043897   59947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I0621 19:28:29.043919   59947 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-709611 && echo "pause-709611" | sudo tee /etc/hostname
	I0621 19:28:29.169875   59947 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-709611
	
	I0621 19:28:29.169914   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHHostname
	I0621 19:28:29.173193   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:29.173675   59947 main.go:141] libmachine: (pause-709611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:fd:31", ip: ""} in network mk-pause-709611: {Iface:virbr1 ExpiryTime:2024-06-21 20:27:09 +0000 UTC Type:0 Mac:52:54:00:67:fd:31 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:pause-709611 Clientid:01:52:54:00:67:fd:31}
	I0621 19:28:29.173719   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined IP address 192.168.39.75 and MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:29.174035   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHPort
	I0621 19:28:29.174260   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHKeyPath
	I0621 19:28:29.174435   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHKeyPath
	I0621 19:28:29.174606   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHUsername
	I0621 19:28:29.174774   59947 main.go:141] libmachine: Using SSH client type: native
	I0621 19:28:29.174995   59947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I0621 19:28:29.175022   59947 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-709611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-709611/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-709611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0621 19:28:29.279731   59947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0621 19:28:29.279759   59947 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19112-8111/.minikube CaCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19112-8111/.minikube}
	I0621 19:28:29.279775   59947 buildroot.go:174] setting up certificates
	I0621 19:28:29.279783   59947 provision.go:84] configureAuth start
	I0621 19:28:29.279791   59947 main.go:141] libmachine: (pause-709611) Calling .GetMachineName
	I0621 19:28:29.280074   59947 main.go:141] libmachine: (pause-709611) Calling .GetIP
	I0621 19:28:29.282956   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:29.283317   59947 main.go:141] libmachine: (pause-709611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:fd:31", ip: ""} in network mk-pause-709611: {Iface:virbr1 ExpiryTime:2024-06-21 20:27:09 +0000 UTC Type:0 Mac:52:54:00:67:fd:31 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:pause-709611 Clientid:01:52:54:00:67:fd:31}
	I0621 19:28:29.283355   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined IP address 192.168.39.75 and MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:29.283484   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHHostname
	I0621 19:28:29.286353   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:29.286784   59947 main.go:141] libmachine: (pause-709611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:fd:31", ip: ""} in network mk-pause-709611: {Iface:virbr1 ExpiryTime:2024-06-21 20:27:09 +0000 UTC Type:0 Mac:52:54:00:67:fd:31 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:pause-709611 Clientid:01:52:54:00:67:fd:31}
	I0621 19:28:29.286834   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined IP address 192.168.39.75 and MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:29.287019   59947 provision.go:143] copyHostCerts
	I0621 19:28:29.287095   59947 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem, removing ...
	I0621 19:28:29.287112   59947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem
	I0621 19:28:29.287191   59947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/ca.pem (1082 bytes)
	I0621 19:28:29.287310   59947 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem, removing ...
	I0621 19:28:29.287323   59947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem
	I0621 19:28:29.287350   59947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/cert.pem (1123 bytes)
	I0621 19:28:29.287420   59947 exec_runner.go:144] found /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem, removing ...
	I0621 19:28:29.287432   59947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem
	I0621 19:28:29.287460   59947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19112-8111/.minikube/key.pem (1675 bytes)
	I0621 19:28:29.287520   59947 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem org=jenkins.pause-709611 san=[127.0.0.1 192.168.39.75 localhost minikube pause-709611]
	I0621 19:28:29.521516   59947 provision.go:177] copyRemoteCerts
	I0621 19:28:29.521571   59947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0621 19:28:29.521592   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHHostname
	I0621 19:28:29.524827   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:29.525194   59947 main.go:141] libmachine: (pause-709611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:fd:31", ip: ""} in network mk-pause-709611: {Iface:virbr1 ExpiryTime:2024-06-21 20:27:09 +0000 UTC Type:0 Mac:52:54:00:67:fd:31 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:pause-709611 Clientid:01:52:54:00:67:fd:31}
	I0621 19:28:29.525220   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined IP address 192.168.39.75 and MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:29.525384   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHPort
	I0621 19:28:29.525608   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHKeyPath
	I0621 19:28:29.525764   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHUsername
	I0621 19:28:29.526054   59947 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/pause-709611/id_rsa Username:docker}
	I0621 19:28:29.605948   59947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0621 19:28:29.633148   59947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0621 19:28:29.658379   59947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0621 19:28:29.688700   59947 provision.go:87] duration metric: took 408.893406ms to configureAuth
	I0621 19:28:29.688733   59947 buildroot.go:189] setting minikube options for container-runtime
	I0621 19:28:29.688976   59947 config.go:182] Loaded profile config "pause-709611": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 19:28:29.689067   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHHostname
	I0621 19:28:29.692398   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:29.692798   59947 main.go:141] libmachine: (pause-709611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:fd:31", ip: ""} in network mk-pause-709611: {Iface:virbr1 ExpiryTime:2024-06-21 20:27:09 +0000 UTC Type:0 Mac:52:54:00:67:fd:31 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:pause-709611 Clientid:01:52:54:00:67:fd:31}
	I0621 19:28:29.692831   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined IP address 192.168.39.75 and MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:29.693032   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHPort
	I0621 19:28:29.693226   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHKeyPath
	I0621 19:28:29.693399   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHKeyPath
	I0621 19:28:29.693551   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHUsername
	I0621 19:28:29.693711   59947 main.go:141] libmachine: Using SSH client type: native
	I0621 19:28:29.693939   59947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I0621 19:28:29.693963   59947 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0621 19:28:35.210126   59947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0621 19:28:35.210152   59947 machine.go:97] duration metric: took 6.2863606s to provisionDockerMachine
	I0621 19:28:35.210162   59947 start.go:293] postStartSetup for "pause-709611" (driver="kvm2")
	I0621 19:28:35.210171   59947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0621 19:28:35.210183   59947 main.go:141] libmachine: (pause-709611) Calling .DriverName
	I0621 19:28:35.210519   59947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0621 19:28:35.210546   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHHostname
	I0621 19:28:35.214121   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:35.214562   59947 main.go:141] libmachine: (pause-709611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:fd:31", ip: ""} in network mk-pause-709611: {Iface:virbr1 ExpiryTime:2024-06-21 20:27:09 +0000 UTC Type:0 Mac:52:54:00:67:fd:31 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:pause-709611 Clientid:01:52:54:00:67:fd:31}
	I0621 19:28:35.214592   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined IP address 192.168.39.75 and MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:35.214786   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHPort
	I0621 19:28:35.214974   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHKeyPath
	I0621 19:28:35.215170   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHUsername
	I0621 19:28:35.215279   59947 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/pause-709611/id_rsa Username:docker}
	I0621 19:28:35.291734   59947 ssh_runner.go:195] Run: cat /etc/os-release
	I0621 19:28:35.295961   59947 info.go:137] Remote host: Buildroot 2023.02.9
	I0621 19:28:35.295987   59947 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/addons for local assets ...
	I0621 19:28:35.296040   59947 filesync.go:126] Scanning /home/jenkins/minikube-integration/19112-8111/.minikube/files for local assets ...
	I0621 19:28:35.296107   59947 filesync.go:149] local asset: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem -> 153292.pem in /etc/ssl/certs
	I0621 19:28:35.296191   59947 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0621 19:28:35.305257   59947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /etc/ssl/certs/153292.pem (1708 bytes)
	I0621 19:28:35.327480   59947 start.go:296] duration metric: took 117.307344ms for postStartSetup
	I0621 19:28:35.327520   59947 fix.go:56] duration metric: took 6.426380668s for fixHost
	I0621 19:28:35.327541   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHHostname
	I0621 19:28:35.330751   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:35.331142   59947 main.go:141] libmachine: (pause-709611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:fd:31", ip: ""} in network mk-pause-709611: {Iface:virbr1 ExpiryTime:2024-06-21 20:27:09 +0000 UTC Type:0 Mac:52:54:00:67:fd:31 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:pause-709611 Clientid:01:52:54:00:67:fd:31}
	I0621 19:28:35.331183   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined IP address 192.168.39.75 and MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:35.331378   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHPort
	I0621 19:28:35.331578   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHKeyPath
	I0621 19:28:35.331748   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHKeyPath
	I0621 19:28:35.331881   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHUsername
	I0621 19:28:35.332020   59947 main.go:141] libmachine: Using SSH client type: native
	I0621 19:28:35.332188   59947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I0621 19:28:35.332198   59947 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0621 19:28:35.430248   59947 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718998115.422103975
	
	I0621 19:28:35.430274   59947 fix.go:216] guest clock: 1718998115.422103975
	I0621 19:28:35.430281   59947 fix.go:229] Guest: 2024-06-21 19:28:35.422103975 +0000 UTC Remote: 2024-06-21 19:28:35.32752496 +0000 UTC m=+6.575079564 (delta=94.579015ms)
	I0621 19:28:35.430320   59947 fix.go:200] guest clock delta is within tolerance: 94.579015ms
	I0621 19:28:35.430327   59947 start.go:83] releasing machines lock for "pause-709611", held for 6.52920144s
	I0621 19:28:35.430354   59947 main.go:141] libmachine: (pause-709611) Calling .DriverName
	I0621 19:28:35.430637   59947 main.go:141] libmachine: (pause-709611) Calling .GetIP
	I0621 19:28:35.433579   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:35.433950   59947 main.go:141] libmachine: (pause-709611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:fd:31", ip: ""} in network mk-pause-709611: {Iface:virbr1 ExpiryTime:2024-06-21 20:27:09 +0000 UTC Type:0 Mac:52:54:00:67:fd:31 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:pause-709611 Clientid:01:52:54:00:67:fd:31}
	I0621 19:28:35.433982   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined IP address 192.168.39.75 and MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:35.434124   59947 main.go:141] libmachine: (pause-709611) Calling .DriverName
	I0621 19:28:35.434617   59947 main.go:141] libmachine: (pause-709611) Calling .DriverName
	I0621 19:28:35.434820   59947 main.go:141] libmachine: (pause-709611) Calling .DriverName
	I0621 19:28:35.434937   59947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0621 19:28:35.434982   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHHostname
	I0621 19:28:35.435066   59947 ssh_runner.go:195] Run: cat /version.json
	I0621 19:28:35.435096   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHHostname
	I0621 19:28:35.437716   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:35.437864   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:35.438129   59947 main.go:141] libmachine: (pause-709611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:fd:31", ip: ""} in network mk-pause-709611: {Iface:virbr1 ExpiryTime:2024-06-21 20:27:09 +0000 UTC Type:0 Mac:52:54:00:67:fd:31 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:pause-709611 Clientid:01:52:54:00:67:fd:31}
	I0621 19:28:35.438153   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined IP address 192.168.39.75 and MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:35.438287   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHPort
	I0621 19:28:35.438457   59947 main.go:141] libmachine: (pause-709611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:fd:31", ip: ""} in network mk-pause-709611: {Iface:virbr1 ExpiryTime:2024-06-21 20:27:09 +0000 UTC Type:0 Mac:52:54:00:67:fd:31 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:pause-709611 Clientid:01:52:54:00:67:fd:31}
	I0621 19:28:35.438478   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined IP address 192.168.39.75 and MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:35.438485   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHKeyPath
	I0621 19:28:35.438669   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHPort
	I0621 19:28:35.438669   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHUsername
	I0621 19:28:35.438835   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHKeyPath
	I0621 19:28:35.438842   59947 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/pause-709611/id_rsa Username:docker}
	I0621 19:28:35.438962   59947 main.go:141] libmachine: (pause-709611) Calling .GetSSHUsername
	I0621 19:28:35.439099   59947 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/pause-709611/id_rsa Username:docker}
	I0621 19:28:35.546269   59947 ssh_runner.go:195] Run: systemctl --version
	I0621 19:28:35.552372   59947 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0621 19:28:35.711354   59947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0621 19:28:35.739048   59947 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0621 19:28:35.739119   59947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0621 19:28:35.757688   59947 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0621 19:28:35.757715   59947 start.go:494] detecting cgroup driver to use...
	I0621 19:28:35.757787   59947 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0621 19:28:35.800875   59947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0621 19:28:35.884295   59947 docker.go:217] disabling cri-docker service (if available) ...
	I0621 19:28:35.884364   59947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0621 19:28:35.995372   59947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0621 19:28:36.056097   59947 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0621 19:28:36.390961   59947 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0621 19:28:36.635781   59947 docker.go:233] disabling docker service ...
	I0621 19:28:36.635908   59947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0621 19:28:36.663792   59947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0621 19:28:36.692229   59947 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0621 19:28:36.914391   59947 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0621 19:28:37.100816   59947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0621 19:28:37.123291   59947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0621 19:28:37.198235   59947 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0621 19:28:37.198308   59947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 19:28:37.227707   59947 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0621 19:28:37.227774   59947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 19:28:37.250820   59947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 19:28:37.271203   59947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 19:28:37.304659   59947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0621 19:28:37.330491   59947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 19:28:37.352029   59947 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 19:28:37.377534   59947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0621 19:28:37.400818   59947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0621 19:28:37.419860   59947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0621 19:28:37.437743   59947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 19:28:37.649471   59947 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0621 19:28:48.057567   59947 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.40805421s)
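	A minimal sketch of what the sed edits above leave in the CRI-O drop-in before the restart, assuming the usual section layout of /etc/crio/crio.conf.d/02-crio.conf (the real file carries other defaults not shown here):
	
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	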
	I0621 19:28:48.057609   59947 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0621 19:28:48.057664   59947 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0621 19:28:48.062906   59947 start.go:562] Will wait 60s for crictl version
	I0621 19:28:48.062980   59947 ssh_runner.go:195] Run: which crictl
	I0621 19:28:48.066631   59947 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0621 19:28:48.103642   59947 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0621 19:28:48.103733   59947 ssh_runner.go:195] Run: crio --version
	I0621 19:28:48.132236   59947 ssh_runner.go:195] Run: crio --version
	I0621 19:28:48.163917   59947 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0621 19:28:48.165154   59947 main.go:141] libmachine: (pause-709611) Calling .GetIP
	I0621 19:28:48.167978   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:48.168383   59947 main.go:141] libmachine: (pause-709611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:fd:31", ip: ""} in network mk-pause-709611: {Iface:virbr1 ExpiryTime:2024-06-21 20:27:09 +0000 UTC Type:0 Mac:52:54:00:67:fd:31 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:pause-709611 Clientid:01:52:54:00:67:fd:31}
	I0621 19:28:48.168411   59947 main.go:141] libmachine: (pause-709611) DBG | domain pause-709611 has defined IP address 192.168.39.75 and MAC address 52:54:00:67:fd:31 in network mk-pause-709611
	I0621 19:28:48.168587   59947 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0621 19:28:48.172769   59947 kubeadm.go:877] updating cluster {Name:pause-709611 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2
ClusterName:pause-709611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.75 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0621 19:28:48.172891   59947 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 19:28:48.172926   59947 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 19:28:48.211362   59947 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 19:28:48.211384   59947 crio.go:433] Images already preloaded, skipping extraction
	I0621 19:28:48.211427   59947 ssh_runner.go:195] Run: sudo crictl images --output json
	I0621 19:28:48.245878   59947 crio.go:514] all images are preloaded for cri-o runtime.
	I0621 19:28:48.245902   59947 cache_images.go:84] Images are preloaded, skipping loading
	I0621 19:28:48.245909   59947 kubeadm.go:928] updating node { 192.168.39.75 8443 v1.30.2 crio true true} ...
	I0621 19:28:48.246009   59947 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-709611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.75
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:pause-709611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0621 19:28:48.246070   59947 ssh_runner.go:195] Run: crio config
	I0621 19:28:48.296236   59947 cni.go:84] Creating CNI manager for ""
	I0621 19:28:48.296255   59947 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0621 19:28:48.296264   59947 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0621 19:28:48.296282   59947 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.75 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-709611 NodeName:pause-709611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.75"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.75 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0621 19:28:48.296408   59947 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.75
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-709611"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.75
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.75"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0621 19:28:48.296463   59947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0621 19:28:48.306587   59947 binaries.go:44] Found k8s binaries, skipping transfer
	I0621 19:28:48.306661   59947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0621 19:28:48.315685   59947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0621 19:28:48.331273   59947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0621 19:28:48.347481   59947 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0621 19:28:48.363674   59947 ssh_runner.go:195] Run: grep 192.168.39.75	control-plane.minikube.internal$ /etc/hosts
	I0621 19:28:48.367653   59947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 19:28:48.512619   59947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0621 19:28:48.526165   59947 certs.go:68] Setting up /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/pause-709611 for IP: 192.168.39.75
	I0621 19:28:48.526187   59947 certs.go:194] generating shared ca certs ...
	I0621 19:28:48.526208   59947 certs.go:226] acquiring lock for ca certs: {Name:mk96df7d45efa699c355b4c4409471361aa3f418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 19:28:48.526362   59947 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key
	I0621 19:28:48.526402   59947 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key
	I0621 19:28:48.526411   59947 certs.go:256] generating profile certs ...
	I0621 19:28:48.526480   59947 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/pause-709611/client.key
	I0621 19:28:48.526536   59947 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/pause-709611/apiserver.key.9f6faac4
	I0621 19:28:48.526569   59947 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/pause-709611/proxy-client.key
	I0621 19:28:48.526675   59947 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem (1338 bytes)
	W0621 19:28:48.526700   59947 certs.go:480] ignoring /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329_empty.pem, impossibly tiny 0 bytes
	I0621 19:28:48.526709   59947 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca-key.pem (1675 bytes)
	I0621 19:28:48.526732   59947 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/ca.pem (1082 bytes)
	I0621 19:28:48.526754   59947 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/cert.pem (1123 bytes)
	I0621 19:28:48.526776   59947 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/certs/key.pem (1675 bytes)
	I0621 19:28:48.526824   59947 certs.go:484] found cert: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem (1708 bytes)
	I0621 19:28:48.527423   59947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0621 19:28:48.551022   59947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0621 19:28:48.574066   59947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0621 19:28:48.595832   59947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0621 19:28:48.618024   59947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/pause-709611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0621 19:28:48.641071   59947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/pause-709611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0621 19:28:48.665457   59947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/pause-709611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0621 19:28:48.688163   59947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/pause-709611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0621 19:28:48.711478   59947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/certs/15329.pem --> /usr/share/ca-certificates/15329.pem (1338 bytes)
	I0621 19:28:48.733457   59947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/ssl/certs/153292.pem --> /usr/share/ca-certificates/153292.pem (1708 bytes)
	I0621 19:28:48.754894   59947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0621 19:28:48.776594   59947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0621 19:28:48.791745   59947 ssh_runner.go:195] Run: openssl version
	I0621 19:28:48.797581   59947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15329.pem && ln -fs /usr/share/ca-certificates/15329.pem /etc/ssl/certs/15329.pem"
	I0621 19:28:48.807828   59947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15329.pem
	I0621 19:28:48.811934   59947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 21 18:22 /usr/share/ca-certificates/15329.pem
	I0621 19:28:48.811985   59947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15329.pem
	I0621 19:28:48.817502   59947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15329.pem /etc/ssl/certs/51391683.0"
	I0621 19:28:48.826754   59947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153292.pem && ln -fs /usr/share/ca-certificates/153292.pem /etc/ssl/certs/153292.pem"
	I0621 19:28:48.853251   59947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153292.pem
	I0621 19:28:48.865980   59947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 21 18:22 /usr/share/ca-certificates/153292.pem
	I0621 19:28:48.866048   59947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153292.pem
	I0621 19:28:48.876738   59947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153292.pem /etc/ssl/certs/3ec20f2e.0"
	I0621 19:28:48.935512   59947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0621 19:28:49.039598   59947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0621 19:28:49.078703   59947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 21 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0621 19:28:49.078777   59947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0621 19:28:49.122105   59947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0621 19:28:49.219390   59947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0621 19:28:49.225326   59947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0621 19:28:49.254160   59947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0621 19:28:49.264417   59947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0621 19:28:49.281351   59947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0621 19:28:49.293788   59947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0621 19:28:49.299514   59947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0621 19:28:49.309431   59947 kubeadm.go:391] StartCluster: {Name:pause-709611 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:pause-709611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.75 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 19:28:49.309550   59947 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0621 19:28:49.309614   59947 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0621 19:28:49.356881   59947 cri.go:89] found id: "a30403507079b51f748bee50c01d61a9f3e828794486d339919a1a454c9b7fe3"
	I0621 19:28:49.356906   59947 cri.go:89] found id: "04e1ab86d0e48628c6f5def0fc98d4bf59fb97125ee2adb85fae83821bec9a5e"
	I0621 19:28:49.356912   59947 cri.go:89] found id: "d01011176eab3038daf9f5589620be76a4c4b032025e51096d03c3680f337dbf"
	I0621 19:28:49.356917   59947 cri.go:89] found id: "bc193ffb133fd929a358fd45295a38dd74f0ee1d7bf31e02d11f625211e9db43"
	I0621 19:28:49.356921   59947 cri.go:89] found id: "8baf767038d56ac9070d98c044a714b87319e12a20b48ff73b1384df8edbbac6"
	I0621 19:28:49.356926   59947 cri.go:89] found id: "5cfc092ef238980872dbff2baa58d701e1a8d21f278152ff4a7d0614c5317788"
	I0621 19:28:49.356930   59947 cri.go:89] found id: ""
	I0621 19:28:49.356980   59947 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-709611 -n pause-709611
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-709611 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-709611 logs -n 25: (1.571634054s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-843358             | cert-expiration-843358    | jenkins | v1.33.1 | 21 Jun 24 19:24 UTC | 21 Jun 24 19:25 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-313770             | running-upgrade-313770    | jenkins | v1.33.1 | 21 Jun 24 19:25 UTC | 21 Jun 24 19:26 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-262372                | NoKubernetes-262372       | jenkins | v1.33.1 | 21 Jun 24 19:25 UTC | 21 Jun 24 19:25 UTC |
	| start   | -p NoKubernetes-262372                | NoKubernetes-262372       | jenkins | v1.33.1 | 21 Jun 24 19:25 UTC | 21 Jun 24 19:26 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-352820 ssh cat     | force-systemd-flag-352820 | jenkins | v1.33.1 | 21 Jun 24 19:25 UTC | 21 Jun 24 19:25 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-352820          | force-systemd-flag-352820 | jenkins | v1.33.1 | 21 Jun 24 19:25 UTC | 21 Jun 24 19:25 UTC |
	| start   | -p cert-options-912751                | cert-options-912751       | jenkins | v1.33.1 | 21 Jun 24 19:25 UTC | 21 Jun 24 19:26 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-262372 sudo           | NoKubernetes-262372       | jenkins | v1.33.1 | 21 Jun 24 19:26 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-262372                | NoKubernetes-262372       | jenkins | v1.33.1 | 21 Jun 24 19:26 UTC | 21 Jun 24 19:26 UTC |
	| start   | -p NoKubernetes-262372                | NoKubernetes-262372       | jenkins | v1.33.1 | 21 Jun 24 19:26 UTC | 21 Jun 24 19:26 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-313770             | running-upgrade-313770    | jenkins | v1.33.1 | 21 Jun 24 19:26 UTC | 21 Jun 24 19:26 UTC |
	| start   | -p pause-709611 --memory=2048         | pause-709611              | jenkins | v1.33.1 | 21 Jun 24 19:26 UTC | 21 Jun 24 19:28 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-912751 ssh               | cert-options-912751       | jenkins | v1.33.1 | 21 Jun 24 19:26 UTC | 21 Jun 24 19:26 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-912751 -- sudo        | cert-options-912751       | jenkins | v1.33.1 | 21 Jun 24 19:26 UTC | 21 Jun 24 19:26 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-912751                | cert-options-912751       | jenkins | v1.33.1 | 21 Jun 24 19:26 UTC | 21 Jun 24 19:26 UTC |
	| start   | -p kubernetes-upgrade-371786          | kubernetes-upgrade-371786 | jenkins | v1.33.1 | 21 Jun 24 19:26 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-262372 sudo           | NoKubernetes-262372       | jenkins | v1.33.1 | 21 Jun 24 19:26 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-262372                | NoKubernetes-262372       | jenkins | v1.33.1 | 21 Jun 24 19:26 UTC | 21 Jun 24 19:26 UTC |
	| start   | -p stopped-upgrade-693942             | minikube                  | jenkins | v1.26.0 | 21 Jun 24 19:27 UTC | 21 Jun 24 19:28 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| start   | -p pause-709611                       | pause-709611              | jenkins | v1.33.1 | 21 Jun 24 19:28 UTC | 21 Jun 24 19:29 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-693942 stop           | minikube                  | jenkins | v1.26.0 | 21 Jun 24 19:28 UTC | 21 Jun 24 19:28 UTC |
	| start   | -p stopped-upgrade-693942             | stopped-upgrade-693942    | jenkins | v1.33.1 | 21 Jun 24 19:28 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-843358             | cert-expiration-843358    | jenkins | v1.33.1 | 21 Jun 24 19:28 UTC | 21 Jun 24 19:29 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-843358             | cert-expiration-843358    | jenkins | v1.33.1 | 21 Jun 24 19:29 UTC | 21 Jun 24 19:29 UTC |
	| start   | -p auto-313995 --memory=3072          | auto-313995               | jenkins | v1.33.1 | 21 Jun 24 19:29 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/21 19:29:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0621 19:29:13.695939   60543 out.go:291] Setting OutFile to fd 1 ...
	I0621 19:29:13.696199   60543 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 19:29:13.696207   60543 out.go:304] Setting ErrFile to fd 2...
	I0621 19:29:13.696211   60543 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 19:29:13.696379   60543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 19:29:13.696909   60543 out.go:298] Setting JSON to false
	I0621 19:29:13.697896   60543 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7852,"bootTime":1718990302,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 19:29:13.697953   60543 start.go:139] virtualization: kvm guest
	I0621 19:29:13.700393   60543 out.go:177] * [auto-313995] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 19:29:13.701877   60543 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 19:29:13.701877   60543 notify.go:220] Checking for updates...
	I0621 19:29:13.703187   60543 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 19:29:13.704630   60543 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 19:29:13.706029   60543 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 19:29:13.707397   60543 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 19:29:13.708528   60543 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 19:29:13.710435   60543 config.go:182] Loaded profile config "kubernetes-upgrade-371786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0621 19:29:13.710614   60543 config.go:182] Loaded profile config "pause-709611": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 19:29:13.710734   60543 config.go:182] Loaded profile config "stopped-upgrade-693942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0621 19:29:13.710853   60543 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 19:29:13.754216   60543 out.go:177] * Using the kvm2 driver based on user configuration
	I0621 19:29:13.755460   60543 start.go:297] selected driver: kvm2
	I0621 19:29:13.755477   60543 start.go:901] validating driver "kvm2" against <nil>
	I0621 19:29:13.755486   60543 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 19:29:13.756258   60543 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 19:29:13.756320   60543 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 19:29:13.772713   60543 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 19:29:13.772774   60543 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0621 19:29:13.772993   60543 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0621 19:29:13.773059   60543 cni.go:84] Creating CNI manager for ""
	I0621 19:29:13.773075   60543 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0621 19:29:13.773087   60543 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0621 19:29:13.773157   60543 start.go:340] cluster config:
	{Name:auto-313995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:auto-313995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 19:29:13.773262   60543 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 19:29:13.775957   60543 out.go:177] * Starting "auto-313995" primary control-plane node in "auto-313995" cluster
	I0621 19:29:13.777112   60543 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 19:29:13.777154   60543 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 19:29:13.777166   60543 cache.go:56] Caching tarball of preloaded images
	I0621 19:29:13.777259   60543 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 19:29:13.777273   60543 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 19:29:13.777361   60543 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/auto-313995/config.json ...
	I0621 19:29:13.777379   60543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/auto-313995/config.json: {Name:mkc54a2a989ad9625979ec901377d998419a191d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 19:29:13.777524   60543 start.go:360] acquireMachinesLock for auto-313995: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 19:29:13.777556   60543 start.go:364] duration metric: took 17.328µs to acquireMachinesLock for "auto-313995"
	I0621 19:29:13.777579   60543 start.go:93] Provisioning new machine with config: &{Name:auto-313995 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.2 ClusterName:auto-313995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 19:29:13.777660   60543 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.200755226Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=15f86a3c-fd58-4114-9659-974ee9a32c2a name=/runtime.v1.RuntimeService/Version
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.202103533Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=31ec3ce5-8a2b-4902-84ed-ead9fea6baa5 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.202454206Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718998154202431175,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=31ec3ce5-8a2b-4902-84ed-ead9fea6baa5 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.203119552Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6cbe353-f3a2-46a6-8e09-d06275ea0784 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.203186731Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6cbe353-f3a2-46a6-8e09-d06275ea0784 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.203471642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:201446faeb7196ba6b136e50c92542d671158faeb4f0d118dbd76c4b93b2a07b,PodSandboxId:9d997d0b0226ea5bd74be40bb5858c442c955a811c1f2c07aaaa944be4d5ac5c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718998135309913283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s4tzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89899309-4a41-4043-b917-9d05815d0a40,},Annotations:map[string]string{io.kubernetes.container.hash: 7206c367,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad8c5ff81c08d052e24e69c8b4d9184d3a838cd2c26adb71c485205dd8e5457,PodSandboxId:5ff8679648f946ddb09902c8738966bbd9277f4f27b66de776e6e6676174d937,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718998135302517585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5gg8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: f4afccdb-9436-419e-812f-5d1b8a9eba53,},Annotations:map[string]string{io.kubernetes.container.hash: b24fb4ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69e4f10489b5f4a8393d1680e7287c5869a6a3438e2c7acd9e47fdc3a6a9df3,PodSandboxId:6064702d2d887d6da40b4b5426e2300f5fe221fd546c893e6ded0b2ea0dcd24c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718998131486871343,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 484e9c7a7b
59f963f1910971c476884a,},Annotations:map[string]string{io.kubernetes.container.hash: da7e579a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f934a8c3fef0a9d187d3b3caa28626790e3e3438edf85d5fe8eb9236699adb58,PodSandboxId:91d24172b47844ef1594faf9c6d935a22720a7e58b5dcc65671e1630e307f4bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718998131458177079,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a3ddf226b2d6c1a4a645ae4942
5b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd43fc1dab57e167c5fee1ee5154bc6e49a8497eba7cc582fb77f7eec0f1ea9,PodSandboxId:a994b2ee75d6df6fb3e52852c36d7799a01ead686cba358c365f328cb9a22ffd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718998131491226677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 195415512a07a028b784af173ab67f1b,},Annotations:map[string]string{io.kubernete
s.container.hash: 73210495,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb757a14e8f334e96fbed7e4ddf9e1193691b24ac90bd56945a13823ed18128e,PodSandboxId:6a748f289957adab0a3817216368f70baa12cfce42ac13d72c8460b27a11cc59,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718998131470819049,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8d208ac65513db97b663391a68f6c9,},Annotations:map[string]string{io.
kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a30403507079b51f748bee50c01d61a9f3e828794486d339919a1a454c9b7fe3,PodSandboxId:4d007334719fd50361661fd8ccb1ce081a0a1b1a7d70b93e6262a7f982e980dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718998117173089929,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s4tzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89899309-4a41-4043-b917-9d05815d0a40,},Annotations:map[string]string{io.kubernetes.container.hash: 7206c
367,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e1ab86d0e48628c6f5def0fc98d4bf59fb97125ee2adb85fae83821bec9a5e,PodSandboxId:b4cc3efa63d4546090c14e773a3ed5b377ab3e34e18be7296e9095df96a60dbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718998116373264542,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-5gg8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4afccdb-9436-419e-812f-5d1b8a9eba53,},Annotations:map[string]string{io.kubernetes.container.hash: b24fb4ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d01011176eab3038daf9f5589620be76a4c4b032025e51096d03c3680f337dbf,PodSandboxId:dc00428f46239543da292e5ed36fc55c7f0e97bb79602bce648cf0bac08f7afe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718998116255151258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-paus
e-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a3ddf226b2d6c1a4a645ae49425b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc193ffb133fd929a358fd45295a38dd74f0ee1d7bf31e02d11f625211e9db43,PodSandboxId:0c77fb3b14b7a14822094758e0da022ee9ade4e9c50a4072251e8613966010ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718998116148812922,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-709611,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 195415512a07a028b784af173ab67f1b,},Annotations:map[string]string{io.kubernetes.container.hash: 73210495,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8baf767038d56ac9070d98c044a714b87319e12a20b48ff73b1384df8edbbac6,PodSandboxId:a429d189b269b6ddc79916f44e9c624e0a7eac9aec2de5de30c726f5d6763522,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1718998116120768984,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 484e9c7a7b59f963f1910971c476884a,},Annotations:map[string]string{io.kubernetes.container.hash: da7e579a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cfc092ef238980872dbff2baa58d701e1a8d21f278152ff4a7d0614c5317788,PodSandboxId:50d831970405c5c958be73452b99daa51a0ee3695ea1a0783fe9e0427e104887,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1718998116029023213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ba8d208ac65513db97b663391a68f6c9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e6cbe353-f3a2-46a6-8e09-d06275ea0784 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.228156113Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=cf82b242-3916-4d0f-b2c9-32db22db3eaf name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.228455357Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9d997d0b0226ea5bd74be40bb5858c442c955a811c1f2c07aaaa944be4d5ac5c,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-s4tzq,Uid:89899309-4a41-4043-b917-9d05815d0a40,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1718998129196286131,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-s4tzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89899309-4a41-4043-b917-9d05815d0a40,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-21T19:27:47.111563175Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5ff8679648f946ddb09902c8738966bbd9277f4f27b66de776e6e6676174d937,Metadata:&PodSandboxMetadata{Name:kube-proxy-5gg8h,Uid:f4afccdb-9436-419e-812f-5d1b8a9eba53,Namespace:kube-system,Attempt
:2,},State:SANDBOX_READY,CreatedAt:1718998129023446059,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5gg8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4afccdb-9436-419e-812f-5d1b8a9eba53,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-21T19:27:46.653775213Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6a748f289957adab0a3817216368f70baa12cfce42ac13d72c8460b27a11cc59,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-709611,Uid:ba8d208ac65513db97b663391a68f6c9,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1718998128953366509,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8d208ac65513db97b663391a68f6c9,tier: control-pla
ne,},Annotations:map[string]string{kubernetes.io/config.hash: ba8d208ac65513db97b663391a68f6c9,kubernetes.io/config.seen: 2024-06-21T19:27:33.359865781Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6064702d2d887d6da40b4b5426e2300f5fe221fd546c893e6ded0b2ea0dcd24c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-709611,Uid:484e9c7a7b59f963f1910971c476884a,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1718998128936083118,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 484e9c7a7b59f963f1910971c476884a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.75:8443,kubernetes.io/config.hash: 484e9c7a7b59f963f1910971c476884a,kubernetes.io/config.seen: 2024-06-21T19:27:33.359863793Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{
Id:91d24172b47844ef1594faf9c6d935a22720a7e58b5dcc65671e1630e307f4bd,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-709611,Uid:d7a3ddf226b2d6c1a4a645ae49425b2d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1718998128921980050,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a3ddf226b2d6c1a4a645ae49425b2d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d7a3ddf226b2d6c1a4a645ae49425b2d,kubernetes.io/config.seen: 2024-06-21T19:27:33.359857580Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a994b2ee75d6df6fb3e52852c36d7799a01ead686cba358c365f328cb9a22ffd,Metadata:&PodSandboxMetadata{Name:etcd-pause-709611,Uid:195415512a07a028b784af173ab67f1b,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1718998128913823529,Labels:map[string]string{component: etcd,io.kubernetes.contain
er.name: POD,io.kubernetes.pod.name: etcd-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 195415512a07a028b784af173ab67f1b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.75:2379,kubernetes.io/config.hash: 195415512a07a028b784af173ab67f1b,kubernetes.io/config.seen: 2024-06-21T19:27:33.359862016Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4d007334719fd50361661fd8ccb1ce081a0a1b1a7d70b93e6262a7f982e980dc,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-s4tzq,Uid:89899309-4a41-4043-b917-9d05815d0a40,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1718998115927673420,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-s4tzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89899309-4a41-4043-b917-9d05815d0a40,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/c
onfig.seen: 2024-06-21T19:27:47.111563175Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a429d189b269b6ddc79916f44e9c624e0a7eac9aec2de5de30c726f5d6763522,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-709611,Uid:484e9c7a7b59f963f1910971c476884a,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1718998115787294677,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 484e9c7a7b59f963f1910971c476884a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.75:8443,kubernetes.io/config.hash: 484e9c7a7b59f963f1910971c476884a,kubernetes.io/config.seen: 2024-06-21T19:27:33.359863793Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0c77fb3b14b7a14822094758e0da022ee9ade4e9c50a4072251e8613966010ed,Metadata:&PodSandboxMetadata{Name:etcd-p
ause-709611,Uid:195415512a07a028b784af173ab67f1b,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1718998115779443662,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 195415512a07a028b784af173ab67f1b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.75:2379,kubernetes.io/config.hash: 195415512a07a028b784af173ab67f1b,kubernetes.io/config.seen: 2024-06-21T19:27:33.359862016Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dc00428f46239543da292e5ed36fc55c7f0e97bb79602bce648cf0bac08f7afe,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-709611,Uid:d7a3ddf226b2d6c1a4a645ae49425b2d,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1718998115771424752,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name
: kube-scheduler-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a3ddf226b2d6c1a4a645ae49425b2d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d7a3ddf226b2d6c1a4a645ae49425b2d,kubernetes.io/config.seen: 2024-06-21T19:27:33.359857580Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b4cc3efa63d4546090c14e773a3ed5b377ab3e34e18be7296e9095df96a60dbb,Metadata:&PodSandboxMetadata{Name:kube-proxy-5gg8h,Uid:f4afccdb-9436-419e-812f-5d1b8a9eba53,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1718998115726791141,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5gg8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4afccdb-9436-419e-812f-5d1b8a9eba53,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-21T19:27:46.653775213Z,kubernetes.io/config.source: api,},Runt
imeHandler:,},&PodSandbox{Id:50d831970405c5c958be73452b99daa51a0ee3695ea1a0783fe9e0427e104887,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-709611,Uid:ba8d208ac65513db97b663391a68f6c9,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1718998115718247505,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8d208ac65513db97b663391a68f6c9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ba8d208ac65513db97b663391a68f6c9,kubernetes.io/config.seen: 2024-06-21T19:27:33.359865781Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a89d735a6dafb3192bc1e9b884c4ac3ebb0585e403cc3402bc3f5aed976f81b9,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-lsk6w,Uid:0b6d9177-d031-4247-8684-8cdc26cbb76c,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:171899806740
4028070,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-lsk6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b6d9177-d031-4247-8684-8cdc26cbb76c,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-21T19:27:47.088560183Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=cf82b242-3916-4d0f-b2c9-32db22db3eaf name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.229713830Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=06ef4a5d-1ec4-4ec7-9d00-1b75cf63d257 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.229829944Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=06ef4a5d-1ec4-4ec7-9d00-1b75cf63d257 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.230811486Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:201446faeb7196ba6b136e50c92542d671158faeb4f0d118dbd76c4b93b2a07b,PodSandboxId:9d997d0b0226ea5bd74be40bb5858c442c955a811c1f2c07aaaa944be4d5ac5c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718998135309913283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s4tzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89899309-4a41-4043-b917-9d05815d0a40,},Annotations:map[string]string{io.kubernetes.container.hash: 7206c367,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad8c5ff81c08d052e24e69c8b4d9184d3a838cd2c26adb71c485205dd8e5457,PodSandboxId:5ff8679648f946ddb09902c8738966bbd9277f4f27b66de776e6e6676174d937,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718998135302517585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5gg8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: f4afccdb-9436-419e-812f-5d1b8a9eba53,},Annotations:map[string]string{io.kubernetes.container.hash: b24fb4ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69e4f10489b5f4a8393d1680e7287c5869a6a3438e2c7acd9e47fdc3a6a9df3,PodSandboxId:6064702d2d887d6da40b4b5426e2300f5fe221fd546c893e6ded0b2ea0dcd24c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718998131486871343,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 484e9c7a7b
59f963f1910971c476884a,},Annotations:map[string]string{io.kubernetes.container.hash: da7e579a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f934a8c3fef0a9d187d3b3caa28626790e3e3438edf85d5fe8eb9236699adb58,PodSandboxId:91d24172b47844ef1594faf9c6d935a22720a7e58b5dcc65671e1630e307f4bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718998131458177079,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a3ddf226b2d6c1a4a645ae4942
5b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd43fc1dab57e167c5fee1ee5154bc6e49a8497eba7cc582fb77f7eec0f1ea9,PodSandboxId:a994b2ee75d6df6fb3e52852c36d7799a01ead686cba358c365f328cb9a22ffd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718998131491226677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 195415512a07a028b784af173ab67f1b,},Annotations:map[string]string{io.kubernete
s.container.hash: 73210495,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb757a14e8f334e96fbed7e4ddf9e1193691b24ac90bd56945a13823ed18128e,PodSandboxId:6a748f289957adab0a3817216368f70baa12cfce42ac13d72c8460b27a11cc59,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718998131470819049,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8d208ac65513db97b663391a68f6c9,},Annotations:map[string]string{io.
kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a30403507079b51f748bee50c01d61a9f3e828794486d339919a1a454c9b7fe3,PodSandboxId:4d007334719fd50361661fd8ccb1ce081a0a1b1a7d70b93e6262a7f982e980dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718998117173089929,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s4tzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89899309-4a41-4043-b917-9d05815d0a40,},Annotations:map[string]string{io.kubernetes.container.hash: 7206c
367,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e1ab86d0e48628c6f5def0fc98d4bf59fb97125ee2adb85fae83821bec9a5e,PodSandboxId:b4cc3efa63d4546090c14e773a3ed5b377ab3e34e18be7296e9095df96a60dbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718998116373264542,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-5gg8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4afccdb-9436-419e-812f-5d1b8a9eba53,},Annotations:map[string]string{io.kubernetes.container.hash: b24fb4ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d01011176eab3038daf9f5589620be76a4c4b032025e51096d03c3680f337dbf,PodSandboxId:dc00428f46239543da292e5ed36fc55c7f0e97bb79602bce648cf0bac08f7afe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718998116255151258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-paus
e-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a3ddf226b2d6c1a4a645ae49425b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc193ffb133fd929a358fd45295a38dd74f0ee1d7bf31e02d11f625211e9db43,PodSandboxId:0c77fb3b14b7a14822094758e0da022ee9ade4e9c50a4072251e8613966010ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718998116148812922,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-709611,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 195415512a07a028b784af173ab67f1b,},Annotations:map[string]string{io.kubernetes.container.hash: 73210495,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8baf767038d56ac9070d98c044a714b87319e12a20b48ff73b1384df8edbbac6,PodSandboxId:a429d189b269b6ddc79916f44e9c624e0a7eac9aec2de5de30c726f5d6763522,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1718998116120768984,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 484e9c7a7b59f963f1910971c476884a,},Annotations:map[string]string{io.kubernetes.container.hash: da7e579a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cfc092ef238980872dbff2baa58d701e1a8d21f278152ff4a7d0614c5317788,PodSandboxId:50d831970405c5c958be73452b99daa51a0ee3695ea1a0783fe9e0427e104887,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1718998116029023213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ba8d208ac65513db97b663391a68f6c9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=06ef4a5d-1ec4-4ec7-9d00-1b75cf63d257 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.253984399Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a074043-1b78-41a4-aa8d-4ce366c8b287 name=/runtime.v1.RuntimeService/Version
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.254074438Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a074043-1b78-41a4-aa8d-4ce366c8b287 name=/runtime.v1.RuntimeService/Version
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.255405095Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9957a8d0-f5e6-4826-b20e-43b09ae6b9fe name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.255887338Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718998154255862720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9957a8d0-f5e6-4826-b20e-43b09ae6b9fe name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.256698482Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9651b8f0-8665-4c75-81bc-75968f15edc4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.256775045Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9651b8f0-8665-4c75-81bc-75968f15edc4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.257030244Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:201446faeb7196ba6b136e50c92542d671158faeb4f0d118dbd76c4b93b2a07b,PodSandboxId:9d997d0b0226ea5bd74be40bb5858c442c955a811c1f2c07aaaa944be4d5ac5c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718998135309913283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s4tzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89899309-4a41-4043-b917-9d05815d0a40,},Annotations:map[string]string{io.kubernetes.container.hash: 7206c367,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad8c5ff81c08d052e24e69c8b4d9184d3a838cd2c26adb71c485205dd8e5457,PodSandboxId:5ff8679648f946ddb09902c8738966bbd9277f4f27b66de776e6e6676174d937,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718998135302517585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5gg8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: f4afccdb-9436-419e-812f-5d1b8a9eba53,},Annotations:map[string]string{io.kubernetes.container.hash: b24fb4ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69e4f10489b5f4a8393d1680e7287c5869a6a3438e2c7acd9e47fdc3a6a9df3,PodSandboxId:6064702d2d887d6da40b4b5426e2300f5fe221fd546c893e6ded0b2ea0dcd24c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718998131486871343,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 484e9c7a7b
59f963f1910971c476884a,},Annotations:map[string]string{io.kubernetes.container.hash: da7e579a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f934a8c3fef0a9d187d3b3caa28626790e3e3438edf85d5fe8eb9236699adb58,PodSandboxId:91d24172b47844ef1594faf9c6d935a22720a7e58b5dcc65671e1630e307f4bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718998131458177079,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a3ddf226b2d6c1a4a645ae4942
5b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd43fc1dab57e167c5fee1ee5154bc6e49a8497eba7cc582fb77f7eec0f1ea9,PodSandboxId:a994b2ee75d6df6fb3e52852c36d7799a01ead686cba358c365f328cb9a22ffd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718998131491226677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 195415512a07a028b784af173ab67f1b,},Annotations:map[string]string{io.kubernete
s.container.hash: 73210495,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb757a14e8f334e96fbed7e4ddf9e1193691b24ac90bd56945a13823ed18128e,PodSandboxId:6a748f289957adab0a3817216368f70baa12cfce42ac13d72c8460b27a11cc59,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718998131470819049,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8d208ac65513db97b663391a68f6c9,},Annotations:map[string]string{io.
kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a30403507079b51f748bee50c01d61a9f3e828794486d339919a1a454c9b7fe3,PodSandboxId:4d007334719fd50361661fd8ccb1ce081a0a1b1a7d70b93e6262a7f982e980dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718998117173089929,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s4tzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89899309-4a41-4043-b917-9d05815d0a40,},Annotations:map[string]string{io.kubernetes.container.hash: 7206c
367,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e1ab86d0e48628c6f5def0fc98d4bf59fb97125ee2adb85fae83821bec9a5e,PodSandboxId:b4cc3efa63d4546090c14e773a3ed5b377ab3e34e18be7296e9095df96a60dbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718998116373264542,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-5gg8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4afccdb-9436-419e-812f-5d1b8a9eba53,},Annotations:map[string]string{io.kubernetes.container.hash: b24fb4ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d01011176eab3038daf9f5589620be76a4c4b032025e51096d03c3680f337dbf,PodSandboxId:dc00428f46239543da292e5ed36fc55c7f0e97bb79602bce648cf0bac08f7afe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718998116255151258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-paus
e-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a3ddf226b2d6c1a4a645ae49425b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc193ffb133fd929a358fd45295a38dd74f0ee1d7bf31e02d11f625211e9db43,PodSandboxId:0c77fb3b14b7a14822094758e0da022ee9ade4e9c50a4072251e8613966010ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718998116148812922,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-709611,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 195415512a07a028b784af173ab67f1b,},Annotations:map[string]string{io.kubernetes.container.hash: 73210495,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8baf767038d56ac9070d98c044a714b87319e12a20b48ff73b1384df8edbbac6,PodSandboxId:a429d189b269b6ddc79916f44e9c624e0a7eac9aec2de5de30c726f5d6763522,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1718998116120768984,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 484e9c7a7b59f963f1910971c476884a,},Annotations:map[string]string{io.kubernetes.container.hash: da7e579a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cfc092ef238980872dbff2baa58d701e1a8d21f278152ff4a7d0614c5317788,PodSandboxId:50d831970405c5c958be73452b99daa51a0ee3695ea1a0783fe9e0427e104887,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1718998116029023213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ba8d208ac65513db97b663391a68f6c9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9651b8f0-8665-4c75-81bc-75968f15edc4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.306861162Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6581ace5-9293-49c4-a401-dcf63fc109f5 name=/runtime.v1.RuntimeService/Version
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.306972977Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6581ace5-9293-49c4-a401-dcf63fc109f5 name=/runtime.v1.RuntimeService/Version
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.308504053Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4a636645-ea89-4afe-8fad-713f481e052a name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.309237559Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718998154309200289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a636645-ea89-4afe-8fad-713f481e052a name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.309917362Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=86973b43-4f7f-4323-a944-9deee72ffdf0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.310013022Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=86973b43-4f7f-4323-a944-9deee72ffdf0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:29:14 pause-709611 crio[2965]: time="2024-06-21 19:29:14.310838582Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:201446faeb7196ba6b136e50c92542d671158faeb4f0d118dbd76c4b93b2a07b,PodSandboxId:9d997d0b0226ea5bd74be40bb5858c442c955a811c1f2c07aaaa944be4d5ac5c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718998135309913283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s4tzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89899309-4a41-4043-b917-9d05815d0a40,},Annotations:map[string]string{io.kubernetes.container.hash: 7206c367,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad8c5ff81c08d052e24e69c8b4d9184d3a838cd2c26adb71c485205dd8e5457,PodSandboxId:5ff8679648f946ddb09902c8738966bbd9277f4f27b66de776e6e6676174d937,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718998135302517585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5gg8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: f4afccdb-9436-419e-812f-5d1b8a9eba53,},Annotations:map[string]string{io.kubernetes.container.hash: b24fb4ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69e4f10489b5f4a8393d1680e7287c5869a6a3438e2c7acd9e47fdc3a6a9df3,PodSandboxId:6064702d2d887d6da40b4b5426e2300f5fe221fd546c893e6ded0b2ea0dcd24c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718998131486871343,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 484e9c7a7b
59f963f1910971c476884a,},Annotations:map[string]string{io.kubernetes.container.hash: da7e579a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f934a8c3fef0a9d187d3b3caa28626790e3e3438edf85d5fe8eb9236699adb58,PodSandboxId:91d24172b47844ef1594faf9c6d935a22720a7e58b5dcc65671e1630e307f4bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718998131458177079,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a3ddf226b2d6c1a4a645ae4942
5b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd43fc1dab57e167c5fee1ee5154bc6e49a8497eba7cc582fb77f7eec0f1ea9,PodSandboxId:a994b2ee75d6df6fb3e52852c36d7799a01ead686cba358c365f328cb9a22ffd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718998131491226677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 195415512a07a028b784af173ab67f1b,},Annotations:map[string]string{io.kubernete
s.container.hash: 73210495,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb757a14e8f334e96fbed7e4ddf9e1193691b24ac90bd56945a13823ed18128e,PodSandboxId:6a748f289957adab0a3817216368f70baa12cfce42ac13d72c8460b27a11cc59,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718998131470819049,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8d208ac65513db97b663391a68f6c9,},Annotations:map[string]string{io.
kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a30403507079b51f748bee50c01d61a9f3e828794486d339919a1a454c9b7fe3,PodSandboxId:4d007334719fd50361661fd8ccb1ce081a0a1b1a7d70b93e6262a7f982e980dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718998117173089929,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s4tzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89899309-4a41-4043-b917-9d05815d0a40,},Annotations:map[string]string{io.kubernetes.container.hash: 7206c
367,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e1ab86d0e48628c6f5def0fc98d4bf59fb97125ee2adb85fae83821bec9a5e,PodSandboxId:b4cc3efa63d4546090c14e773a3ed5b377ab3e34e18be7296e9095df96a60dbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718998116373264542,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-5gg8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4afccdb-9436-419e-812f-5d1b8a9eba53,},Annotations:map[string]string{io.kubernetes.container.hash: b24fb4ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d01011176eab3038daf9f5589620be76a4c4b032025e51096d03c3680f337dbf,PodSandboxId:dc00428f46239543da292e5ed36fc55c7f0e97bb79602bce648cf0bac08f7afe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718998116255151258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-paus
e-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a3ddf226b2d6c1a4a645ae49425b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc193ffb133fd929a358fd45295a38dd74f0ee1d7bf31e02d11f625211e9db43,PodSandboxId:0c77fb3b14b7a14822094758e0da022ee9ade4e9c50a4072251e8613966010ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718998116148812922,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-709611,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 195415512a07a028b784af173ab67f1b,},Annotations:map[string]string{io.kubernetes.container.hash: 73210495,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8baf767038d56ac9070d98c044a714b87319e12a20b48ff73b1384df8edbbac6,PodSandboxId:a429d189b269b6ddc79916f44e9c624e0a7eac9aec2de5de30c726f5d6763522,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1718998116120768984,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 484e9c7a7b59f963f1910971c476884a,},Annotations:map[string]string{io.kubernetes.container.hash: da7e579a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cfc092ef238980872dbff2baa58d701e1a8d21f278152ff4a7d0614c5317788,PodSandboxId:50d831970405c5c958be73452b99daa51a0ee3695ea1a0783fe9e0427e104887,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1718998116029023213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ba8d208ac65513db97b663391a68f6c9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=86973b43-4f7f-4323-a944-9deee72ffdf0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	201446faeb719       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago      Running             coredns                   2                   9d997d0b0226e       coredns-7db6d8ff4d-s4tzq
	fad8c5ff81c08       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   19 seconds ago      Running             kube-proxy                2                   5ff8679648f94       kube-proxy-5gg8h
	9bd43fc1dab57       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   22 seconds ago      Running             etcd                      2                   a994b2ee75d6d       etcd-pause-709611
	f69e4f10489b5       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   22 seconds ago      Running             kube-apiserver            2                   6064702d2d887       kube-apiserver-pause-709611
	fb757a14e8f33       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   22 seconds ago      Running             kube-controller-manager   2                   6a748f289957a       kube-controller-manager-pause-709611
	f934a8c3fef0a       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   22 seconds ago      Running             kube-scheduler            2                   91d24172b4784       kube-scheduler-pause-709611
	a30403507079b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   37 seconds ago      Exited              coredns                   1                   4d007334719fd       coredns-7db6d8ff4d-s4tzq
	04e1ab86d0e48       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   38 seconds ago      Exited              kube-proxy                1                   b4cc3efa63d45       kube-proxy-5gg8h
	d01011176eab3       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   38 seconds ago      Exited              kube-scheduler            1                   dc00428f46239       kube-scheduler-pause-709611
	bc193ffb133fd       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   38 seconds ago      Exited              etcd                      1                   0c77fb3b14b7a       etcd-pause-709611
	8baf767038d56       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   38 seconds ago      Exited              kube-apiserver            1                   a429d189b269b       kube-apiserver-pause-709611
	5cfc092ef2389       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   38 seconds ago      Exited              kube-controller-manager   1                   50d831970405c       kube-controller-manager-pause-709611
	
	
	==> coredns [201446faeb7196ba6b136e50c92542d671158faeb4f0d118dbd76c4b93b2a07b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40363 - 37112 "HINFO IN 2689991368494369451.6408760700095078131. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013030005s
	
	
	==> coredns [a30403507079b51f748bee50c01d61a9f3e828794486d339919a1a454c9b7fe3] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:52441 - 1049 "HINFO IN 6068307658687501304.8304913484256281873. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010480563s
	
	
	==> describe nodes <==
	Name:               pause-709611
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-709611
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=pause-709611
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_21T19_27_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 19:27:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-709611
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 19:29:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 19:28:54 +0000   Fri, 21 Jun 2024 19:27:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 19:28:54 +0000   Fri, 21 Jun 2024 19:27:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 19:28:54 +0000   Fri, 21 Jun 2024 19:27:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 19:28:54 +0000   Fri, 21 Jun 2024 19:27:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.75
	  Hostname:    pause-709611
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 13635eec7a2e4d85ade6317c56bfbfae
	  System UUID:                13635eec-7a2e-4d85-ade6-317c56bfbfae
	  Boot ID:                    2beaf04d-2f9a-4e10-99b9-552f6bf9ae69
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-s4tzq                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     87s
	  kube-system                 etcd-pause-709611                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         101s
	  kube-system                 kube-apiserver-pause-709611             250m (12%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-controller-manager-pause-709611    200m (10%)    0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-proxy-5gg8h                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-scheduler-pause-709611             100m (5%)     0 (0%)      0 (0%)           0 (0%)         101s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 84s                  kube-proxy       
	  Normal  Starting                 19s                  kube-proxy       
	  Normal  Starting                 107s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  107s (x8 over 107s)  kubelet          Node pause-709611 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s (x8 over 107s)  kubelet          Node pause-709611 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s (x7 over 107s)  kubelet          Node pause-709611 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  107s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    101s                 kubelet          Node pause-709611 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  101s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  101s                 kubelet          Node pause-709611 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     101s                 kubelet          Node pause-709611 status is now: NodeHasSufficientPID
	  Normal  Starting                 101s                 kubelet          Starting kubelet.
	  Normal  NodeReady                100s                 kubelet          Node pause-709611 status is now: NodeReady
	  Normal  RegisteredNode           88s                  node-controller  Node pause-709611 event: Registered Node pause-709611 in Controller
	  Normal  Starting                 24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)    kubelet          Node pause-709611 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)    kubelet          Node pause-709611 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)    kubelet          Node pause-709611 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                   node-controller  Node pause-709611 event: Registered Node pause-709611 in Controller
	
	
	==> dmesg <==
	[  +6.944707] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.064221] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061192] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.212047] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.138377] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.284987] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +4.181155] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
	[  +4.492451] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.059229] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.996926] systemd-fstab-generator[1275]: Ignoring "noauto" option for root device
	[  +0.095634] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.520645] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.738208] systemd-fstab-generator[1500]: Ignoring "noauto" option for root device
	[ +12.033391] kauditd_printk_skb: 89 callbacks suppressed
	[Jun21 19:28] systemd-fstab-generator[2633]: Ignoring "noauto" option for root device
	[  +0.240394] systemd-fstab-generator[2716]: Ignoring "noauto" option for root device
	[  +0.307149] systemd-fstab-generator[2825]: Ignoring "noauto" option for root device
	[  +0.205191] systemd-fstab-generator[2847]: Ignoring "noauto" option for root device
	[  +0.532470] systemd-fstab-generator[2947]: Ignoring "noauto" option for root device
	[ +10.902390] systemd-fstab-generator[3211]: Ignoring "noauto" option for root device
	[  +0.089151] kauditd_printk_skb: 173 callbacks suppressed
	[  +2.239814] systemd-fstab-generator[3654]: Ignoring "noauto" option for root device
	[  +4.664391] kauditd_printk_skb: 109 callbacks suppressed
	[Jun21 19:29] kauditd_printk_skb: 2 callbacks suppressed
	[  +2.639988] systemd-fstab-generator[4095]: Ignoring "noauto" option for root device
	
	
	==> etcd [9bd43fc1dab57e167c5fee1ee5154bc6e49a8497eba7cc582fb77f7eec0f1ea9] <==
	{"level":"info","ts":"2024-06-21T19:28:52.333663Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T19:28:52.335216Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-21T19:28:52.337779Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.75:2379"}
	{"level":"info","ts":"2024-06-21T19:28:52.344041Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-21T19:28:52.344656Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-06-21T19:28:59.521123Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"255.681591ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15851140427028917395 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-709611.17db1bb145cf0d06\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-709611.17db1bb145cf0d06\" value_size:544 lease:6627768390174141585 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-06-21T19:28:59.521461Z","caller":"traceutil/trace.go:171","msg":"trace[959245181] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"383.895736ms","start":"2024-06-21T19:28:59.137543Z","end":"2024-06-21T19:28:59.521439Z","steps":["trace[959245181] 'process raft request'  (duration: 127.586901ms)","trace[959245181] 'compare'  (duration: 255.536522ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T19:28:59.521681Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-21T19:28:59.137527Z","time spent":"384.120232ms","remote":"127.0.0.1:44956","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":616,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-709611.17db1bb145cf0d06\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-709611.17db1bb145cf0d06\" value_size:544 lease:6627768390174141585 >> failure:<>"}
	{"level":"warn","ts":"2024-06-21T19:29:00.157395Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"376.149942ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15851140427028917399 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-709611.17db1bb14b1d76ab\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-709611.17db1bb14b1d76ab\" value_size:594 lease:6627768390174141585 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-06-21T19:29:00.157659Z","caller":"traceutil/trace.go:171","msg":"trace[1645743485] linearizableReadLoop","detail":"{readStateIndex:505; appliedIndex:504; }","duration":"231.867622ms","start":"2024-06-21T19:28:59.925774Z","end":"2024-06-21T19:29:00.157642Z","steps":["trace[1645743485] 'read index received'  (duration: 38.34µs)","trace[1645743485] 'applied index is now lower than readState.Index'  (duration: 231.826975ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-21T19:29:00.157672Z","caller":"traceutil/trace.go:171","msg":"trace[2087919787] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"624.078725ms","start":"2024-06-21T19:28:59.533571Z","end":"2024-06-21T19:29:00.15765Z","steps":["trace[2087919787] 'process raft request'  (duration: 247.628289ms)","trace[2087919787] 'compare'  (duration: 376.011319ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T19:29:00.157858Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.119607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-s4tzq\" ","response":"range_response_count:1 size:5020"}
	{"level":"info","ts":"2024-06-21T19:29:00.157898Z","caller":"traceutil/trace.go:171","msg":"trace[1920827187] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-s4tzq; range_end:; response_count:1; response_revision:465; }","duration":"162.179917ms","start":"2024-06-21T19:28:59.995711Z","end":"2024-06-21T19:29:00.157891Z","steps":["trace[1920827187] 'agreement among raft nodes before linearized reading'  (duration: 162.126734ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-21T19:29:00.15785Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-21T19:28:59.533559Z","time spent":"624.252919ms","remote":"127.0.0.1:44956","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":666,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-709611.17db1bb14b1d76ab\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-709611.17db1bb14b1d76ab\" value_size:594 lease:6627768390174141585 >> failure:<>"}
	{"level":"warn","ts":"2024-06-21T19:29:00.157785Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.039154ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-s4tzq\" ","response":"range_response_count:1 size:5020"}
	{"level":"info","ts":"2024-06-21T19:29:00.159178Z","caller":"traceutil/trace.go:171","msg":"trace[1285980518] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-s4tzq; range_end:; response_count:1; response_revision:465; }","duration":"233.456326ms","start":"2024-06-21T19:28:59.925709Z","end":"2024-06-21T19:29:00.159165Z","steps":["trace[1285980518] 'agreement among raft nodes before linearized reading'  (duration: 231.989815ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-21T19:29:00.572024Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"281.242757ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15851140427028917403 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-709611.17db1bb14b1d957b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-709611.17db1bb14b1d957b\" value_size:592 lease:6627768390174141585 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-06-21T19:29:00.572205Z","caller":"traceutil/trace.go:171","msg":"trace[1699236452] linearizableReadLoop","detail":"{readStateIndex:507; appliedIndex:505; }","duration":"145.766115ms","start":"2024-06-21T19:29:00.426429Z","end":"2024-06-21T19:29:00.572195Z","steps":["trace[1699236452] 'read index received'  (duration: 145.486662ms)","trace[1699236452] 'applied index is now lower than readState.Index'  (duration: 278.894µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T19:29:00.572335Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.939853ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-s4tzq\" ","response":"range_response_count:1 size:4842"}
	{"level":"info","ts":"2024-06-21T19:29:00.572383Z","caller":"traceutil/trace.go:171","msg":"trace[1400233159] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-s4tzq; range_end:; response_count:1; response_revision:467; }","duration":"145.994272ms","start":"2024-06-21T19:29:00.426381Z","end":"2024-06-21T19:29:00.572375Z","steps":["trace[1400233159] 'agreement among raft nodes before linearized reading'  (duration: 145.897611ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-21T19:29:00.572487Z","caller":"traceutil/trace.go:171","msg":"trace[1560048698] transaction","detail":"{read_only:false; response_revision:467; number_of_response:1; }","duration":"407.088906ms","start":"2024-06-21T19:29:00.165393Z","end":"2024-06-21T19:29:00.572482Z","steps":["trace[1560048698] 'process raft request'  (duration: 406.754254ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-21T19:29:00.572726Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-21T19:29:00.16538Z","time spent":"407.133153ms","remote":"127.0.0.1:45078","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4827,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-s4tzq\" mod_revision:459 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-s4tzq\" value_size:4768 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-s4tzq\" > >"}
	{"level":"info","ts":"2024-06-21T19:29:00.572809Z","caller":"traceutil/trace.go:171","msg":"trace[1712061501] transaction","detail":"{read_only:false; response_revision:466; number_of_response:1; }","duration":"408.662061ms","start":"2024-06-21T19:29:00.16414Z","end":"2024-06-21T19:29:00.572802Z","steps":["trace[1712061501] 'process raft request'  (duration: 126.604039ms)","trace[1712061501] 'compare'  (duration: 281.15042ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T19:29:00.572854Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-21T19:29:00.164127Z","time spent":"408.710236ms","remote":"127.0.0.1:44956","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":664,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-709611.17db1bb14b1d957b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-709611.17db1bb14b1d957b\" value_size:592 lease:6627768390174141585 >> failure:<>"}
	{"level":"info","ts":"2024-06-21T19:29:00.842935Z","caller":"traceutil/trace.go:171","msg":"trace[1434030198] transaction","detail":"{read_only:false; response_revision:469; number_of_response:1; }","duration":"169.503602ms","start":"2024-06-21T19:29:00.673408Z","end":"2024-06-21T19:29:00.842912Z","steps":["trace[1434030198] 'process raft request'  (duration: 76.575581ms)","trace[1434030198] 'compare'  (duration: 92.808797ms)"],"step_count":2}
	
	
	==> etcd [bc193ffb133fd929a358fd45295a38dd74f0ee1d7bf31e02d11f625211e9db43] <==
	{"level":"info","ts":"2024-06-21T19:28:36.975407Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"66.029033ms"}
	{"level":"info","ts":"2024-06-21T19:28:37.005464Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-06-21T19:28:37.055905Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"63c722bb74796bf1","local-member-id":"b968d2e8d13dbfa","commit-index":469}
	{"level":"info","ts":"2024-06-21T19:28:37.056019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b968d2e8d13dbfa switched to configuration voters=()"}
	{"level":"info","ts":"2024-06-21T19:28:37.056052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b968d2e8d13dbfa became follower at term 2"}
	{"level":"info","ts":"2024-06-21T19:28:37.056082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft b968d2e8d13dbfa [peers: [], term: 2, commit: 469, applied: 0, lastindex: 469, lastterm: 2]"}
	{"level":"warn","ts":"2024-06-21T19:28:37.059724Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-06-21T19:28:37.123162Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":446}
	{"level":"info","ts":"2024-06-21T19:28:37.130251Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-06-21T19:28:37.142744Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"b968d2e8d13dbfa","timeout":"7s"}
	{"level":"info","ts":"2024-06-21T19:28:37.148801Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"b968d2e8d13dbfa"}
	{"level":"info","ts":"2024-06-21T19:28:37.149174Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"b968d2e8d13dbfa","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-06-21T19:28:37.158873Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-06-21T19:28:37.159704Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-21T19:28:37.16124Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-21T19:28:37.161258Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-21T19:28:37.161901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b968d2e8d13dbfa switched to configuration voters=(835010011998706682)"}
	{"level":"info","ts":"2024-06-21T19:28:37.162492Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"63c722bb74796bf1","local-member-id":"b968d2e8d13dbfa","added-peer-id":"b968d2e8d13dbfa","added-peer-peer-urls":["https://192.168.39.75:2380"]}
	{"level":"info","ts":"2024-06-21T19:28:37.162242Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-21T19:28:37.162791Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"63c722bb74796bf1","local-member-id":"b968d2e8d13dbfa","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T19:28:37.162958Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b968d2e8d13dbfa","initial-advertise-peer-urls":["https://192.168.39.75:2380"],"listen-peer-urls":["https://192.168.39.75:2380"],"advertise-client-urls":["https://192.168.39.75:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.75:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-21T19:28:37.163023Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-21T19:28:37.162263Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.75:2380"}
	{"level":"info","ts":"2024-06-21T19:28:37.163054Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.75:2380"}
	{"level":"info","ts":"2024-06-21T19:28:37.162988Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	
	
	==> kernel <==
	 19:29:14 up 2 min,  0 users,  load average: 0.82, 0.41, 0.16
	Linux pause-709611 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8baf767038d56ac9070d98c044a714b87319e12a20b48ff73b1384df8edbbac6] <==
	I0621 19:28:36.866747       1 options.go:221] external host was not specified, using 192.168.39.75
	I0621 19:28:36.869848       1 server.go:148] Version: v1.30.2
	I0621 19:28:36.869892       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [f69e4f10489b5f4a8393d1680e7287c5869a6a3438e2c7acd9e47fdc3a6a9df3] <==
	I0621 19:28:54.450142       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0621 19:28:54.450165       1 cache.go:39] Caches are synced for autoregister controller
	I0621 19:28:54.450377       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0621 19:28:54.450694       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0621 19:28:54.450769       1 policy_source.go:224] refreshing policies
	I0621 19:28:54.454075       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0621 19:28:54.528827       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0621 19:28:54.532100       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0621 19:28:54.534376       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0621 19:28:54.534440       1 shared_informer.go:320] Caches are synced for configmaps
	I0621 19:28:54.536097       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0621 19:28:54.536189       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	E0621 19:28:54.551784       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0621 19:28:55.355938       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0621 19:28:56.259875       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0621 19:28:56.276649       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0621 19:28:56.324441       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0621 19:28:56.360684       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0621 19:28:56.368295       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0621 19:29:00.158713       1 trace.go:236] Trace[1660025270]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:98d190e1-71f0-4e4b-8bfc-5610e2ed64fc,client:192.168.39.75,api-group:,api-version:v1,name:,subresource:,namespace:default,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.30.2 (linux/amd64) kubernetes/3968350,verb:POST (21-Jun-2024 19:28:59.532) (total time: 626ms):
	Trace[1660025270]: ["Create etcd3" audit-id:98d190e1-71f0-4e4b-8bfc-5610e2ed64fc,key:/events/default/pause-709611.17db1bb14b1d76ab,type:*core.Event,resource:events 625ms (19:28:59.533)
	Trace[1660025270]:  ---"Txn call succeeded" 625ms (19:29:00.158)]
	Trace[1660025270]: [626.523303ms] [626.523303ms] END
	I0621 19:29:07.522711       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0621 19:29:07.527006       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [5cfc092ef238980872dbff2baa58d701e1a8d21f278152ff4a7d0614c5317788] <==
	
	
	==> kube-controller-manager [fb757a14e8f334e96fbed7e4ddf9e1193691b24ac90bd56945a13823ed18128e] <==
	I0621 19:29:07.574170       1 shared_informer.go:320] Caches are synced for GC
	I0621 19:29:07.587693       1 shared_informer.go:320] Caches are synced for job
	I0621 19:29:07.635935       1 shared_informer.go:320] Caches are synced for daemon sets
	I0621 19:29:07.653273       1 shared_informer.go:320] Caches are synced for taint
	I0621 19:29:07.653413       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0621 19:29:07.653533       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-709611"
	I0621 19:29:07.653596       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0621 19:29:07.657677       1 shared_informer.go:320] Caches are synced for attach detach
	I0621 19:29:07.661355       1 shared_informer.go:320] Caches are synced for resource quota
	I0621 19:29:07.665216       1 shared_informer.go:320] Caches are synced for ephemeral
	I0621 19:29:07.686690       1 shared_informer.go:320] Caches are synced for expand
	I0621 19:29:07.686805       1 shared_informer.go:320] Caches are synced for stateful set
	I0621 19:29:07.697793       1 shared_informer.go:320] Caches are synced for PVC protection
	I0621 19:29:07.701174       1 shared_informer.go:320] Caches are synced for persistent volume
	I0621 19:29:07.728122       1 shared_informer.go:320] Caches are synced for disruption
	I0621 19:29:07.730463       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0621 19:29:07.736158       1 shared_informer.go:320] Caches are synced for deployment
	I0621 19:29:07.748684       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0621 19:29:07.748763       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0621 19:29:07.748743       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0621 19:29:07.748995       1 shared_informer.go:320] Caches are synced for resource quota
	I0621 19:29:07.749049       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0621 19:29:08.192702       1 shared_informer.go:320] Caches are synced for garbage collector
	I0621 19:29:08.243597       1 shared_informer.go:320] Caches are synced for garbage collector
	I0621 19:29:08.243663       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [04e1ab86d0e48628c6f5def0fc98d4bf59fb97125ee2adb85fae83821bec9a5e] <==
	I0621 19:28:37.519698       1 server_linux.go:69] "Using iptables proxy"
	
	
	==> kube-proxy [fad8c5ff81c08d052e24e69c8b4d9184d3a838cd2c26adb71c485205dd8e5457] <==
	I0621 19:28:55.563035       1 server_linux.go:69] "Using iptables proxy"
	I0621 19:28:55.585111       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.75"]
	I0621 19:28:55.655494       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0621 19:28:55.655559       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0621 19:28:55.655579       1 server_linux.go:165] "Using iptables Proxier"
	I0621 19:28:55.659852       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0621 19:28:55.660065       1 server.go:872] "Version info" version="v1.30.2"
	I0621 19:28:55.660088       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 19:28:55.661564       1 config.go:192] "Starting service config controller"
	I0621 19:28:55.661685       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0621 19:28:55.661751       1 config.go:101] "Starting endpoint slice config controller"
	I0621 19:28:55.661759       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0621 19:28:55.662392       1 config.go:319] "Starting node config controller"
	I0621 19:28:55.662421       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0621 19:28:55.762589       1 shared_informer.go:320] Caches are synced for node config
	I0621 19:28:55.762721       1 shared_informer.go:320] Caches are synced for service config
	I0621 19:28:55.762733       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d01011176eab3038daf9f5589620be76a4c4b032025e51096d03c3680f337dbf] <==
	
	
	==> kube-scheduler [f934a8c3fef0a9d187d3b3caa28626790e3e3438edf85d5fe8eb9236699adb58] <==
	I0621 19:28:53.022710       1 serving.go:380] Generated self-signed cert in-memory
	W0621 19:28:54.355421       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0621 19:28:54.355526       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0621 19:28:54.355545       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0621 19:28:54.355556       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0621 19:28:54.429283       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0621 19:28:54.429413       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 19:28:54.431449       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0621 19:28:54.431877       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0621 19:28:54.435992       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0621 19:28:54.431964       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0621 19:28:54.537315       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 21 19:28:51 pause-709611 kubelet[3661]: I0621 19:28:51.190637    3661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d7a3ddf226b2d6c1a4a645ae49425b2d-kubeconfig\") pod \"kube-scheduler-pause-709611\" (UID: \"d7a3ddf226b2d6c1a4a645ae49425b2d\") " pod="kube-system/kube-scheduler-pause-709611"
	Jun 21 19:28:51 pause-709611 kubelet[3661]: E0621 19:28:51.191267    3661 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-709611?timeout=10s\": dial tcp 192.168.39.75:8443: connect: connection refused" interval="400ms"
	Jun 21 19:28:51 pause-709611 kubelet[3661]: I0621 19:28:51.291062    3661 kubelet_node_status.go:73] "Attempting to register node" node="pause-709611"
	Jun 21 19:28:51 pause-709611 kubelet[3661]: E0621 19:28:51.292214    3661 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.75:8443: connect: connection refused" node="pause-709611"
	Jun 21 19:28:51 pause-709611 kubelet[3661]: I0621 19:28:51.439501    3661 scope.go:117] "RemoveContainer" containerID="bc193ffb133fd929a358fd45295a38dd74f0ee1d7bf31e02d11f625211e9db43"
	Jun 21 19:28:51 pause-709611 kubelet[3661]: I0621 19:28:51.439853    3661 scope.go:117] "RemoveContainer" containerID="8baf767038d56ac9070d98c044a714b87319e12a20b48ff73b1384df8edbbac6"
	Jun 21 19:28:51 pause-709611 kubelet[3661]: I0621 19:28:51.441150    3661 scope.go:117] "RemoveContainer" containerID="d01011176eab3038daf9f5589620be76a4c4b032025e51096d03c3680f337dbf"
	Jun 21 19:28:51 pause-709611 kubelet[3661]: I0621 19:28:51.441496    3661 scope.go:117] "RemoveContainer" containerID="5cfc092ef238980872dbff2baa58d701e1a8d21f278152ff4a7d0614c5317788"
	Jun 21 19:28:51 pause-709611 kubelet[3661]: E0621 19:28:51.597934    3661 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-709611?timeout=10s\": dial tcp 192.168.39.75:8443: connect: connection refused" interval="800ms"
	Jun 21 19:28:51 pause-709611 kubelet[3661]: I0621 19:28:51.694774    3661 kubelet_node_status.go:73] "Attempting to register node" node="pause-709611"
	Jun 21 19:28:51 pause-709611 kubelet[3661]: E0621 19:28:51.695553    3661 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.75:8443: connect: connection refused" node="pause-709611"
	Jun 21 19:28:52 pause-709611 kubelet[3661]: I0621 19:28:52.497297    3661 kubelet_node_status.go:73] "Attempting to register node" node="pause-709611"
	Jun 21 19:28:54 pause-709611 kubelet[3661]: I0621 19:28:54.471411    3661 kubelet_node_status.go:112] "Node was previously registered" node="pause-709611"
	Jun 21 19:28:54 pause-709611 kubelet[3661]: I0621 19:28:54.471538    3661 kubelet_node_status.go:76] "Successfully registered node" node="pause-709611"
	Jun 21 19:28:54 pause-709611 kubelet[3661]: I0621 19:28:54.473320    3661 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 21 19:28:54 pause-709611 kubelet[3661]: I0621 19:28:54.474438    3661 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 21 19:28:54 pause-709611 kubelet[3661]: I0621 19:28:54.967847    3661 apiserver.go:52] "Watching apiserver"
	Jun 21 19:28:54 pause-709611 kubelet[3661]: I0621 19:28:54.971949    3661 topology_manager.go:215] "Topology Admit Handler" podUID="f4afccdb-9436-419e-812f-5d1b8a9eba53" podNamespace="kube-system" podName="kube-proxy-5gg8h"
	Jun 21 19:28:54 pause-709611 kubelet[3661]: I0621 19:28:54.972367    3661 topology_manager.go:215] "Topology Admit Handler" podUID="89899309-4a41-4043-b917-9d05815d0a40" podNamespace="kube-system" podName="coredns-7db6d8ff4d-s4tzq"
	Jun 21 19:28:54 pause-709611 kubelet[3661]: I0621 19:28:54.985354    3661 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 21 19:28:55 pause-709611 kubelet[3661]: I0621 19:28:55.001596    3661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4afccdb-9436-419e-812f-5d1b8a9eba53-xtables-lock\") pod \"kube-proxy-5gg8h\" (UID: \"f4afccdb-9436-419e-812f-5d1b8a9eba53\") " pod="kube-system/kube-proxy-5gg8h"
	Jun 21 19:28:55 pause-709611 kubelet[3661]: I0621 19:28:55.001810    3661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4afccdb-9436-419e-812f-5d1b8a9eba53-lib-modules\") pod \"kube-proxy-5gg8h\" (UID: \"f4afccdb-9436-419e-812f-5d1b8a9eba53\") " pod="kube-system/kube-proxy-5gg8h"
	Jun 21 19:28:55 pause-709611 kubelet[3661]: I0621 19:28:55.273748    3661 scope.go:117] "RemoveContainer" containerID="a30403507079b51f748bee50c01d61a9f3e828794486d339919a1a454c9b7fe3"
	Jun 21 19:28:55 pause-709611 kubelet[3661]: I0621 19:28:55.274049    3661 scope.go:117] "RemoveContainer" containerID="04e1ab86d0e48628c6f5def0fc98d4bf59fb97125ee2adb85fae83821bec9a5e"
	Jun 21 19:28:59 pause-709611 kubelet[3661]: I0621 19:28:59.993314    3661 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-709611 -n pause-709611
helpers_test.go:261: (dbg) Run:  kubectl --context pause-709611 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-709611 -n pause-709611
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-709611 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-709611 logs -n 25: (1.458577898s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-843358             | cert-expiration-843358    | jenkins | v1.33.1 | 21 Jun 24 19:24 UTC | 21 Jun 24 19:25 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-313770             | running-upgrade-313770    | jenkins | v1.33.1 | 21 Jun 24 19:25 UTC | 21 Jun 24 19:26 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-262372                | NoKubernetes-262372       | jenkins | v1.33.1 | 21 Jun 24 19:25 UTC | 21 Jun 24 19:25 UTC |
	| start   | -p NoKubernetes-262372                | NoKubernetes-262372       | jenkins | v1.33.1 | 21 Jun 24 19:25 UTC | 21 Jun 24 19:26 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-352820 ssh cat     | force-systemd-flag-352820 | jenkins | v1.33.1 | 21 Jun 24 19:25 UTC | 21 Jun 24 19:25 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-352820          | force-systemd-flag-352820 | jenkins | v1.33.1 | 21 Jun 24 19:25 UTC | 21 Jun 24 19:25 UTC |
	| start   | -p cert-options-912751                | cert-options-912751       | jenkins | v1.33.1 | 21 Jun 24 19:25 UTC | 21 Jun 24 19:26 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-262372 sudo           | NoKubernetes-262372       | jenkins | v1.33.1 | 21 Jun 24 19:26 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-262372                | NoKubernetes-262372       | jenkins | v1.33.1 | 21 Jun 24 19:26 UTC | 21 Jun 24 19:26 UTC |
	| start   | -p NoKubernetes-262372                | NoKubernetes-262372       | jenkins | v1.33.1 | 21 Jun 24 19:26 UTC | 21 Jun 24 19:26 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-313770             | running-upgrade-313770    | jenkins | v1.33.1 | 21 Jun 24 19:26 UTC | 21 Jun 24 19:26 UTC |
	| start   | -p pause-709611 --memory=2048         | pause-709611              | jenkins | v1.33.1 | 21 Jun 24 19:26 UTC | 21 Jun 24 19:28 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-912751 ssh               | cert-options-912751       | jenkins | v1.33.1 | 21 Jun 24 19:26 UTC | 21 Jun 24 19:26 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-912751 -- sudo        | cert-options-912751       | jenkins | v1.33.1 | 21 Jun 24 19:26 UTC | 21 Jun 24 19:26 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-912751                | cert-options-912751       | jenkins | v1.33.1 | 21 Jun 24 19:26 UTC | 21 Jun 24 19:26 UTC |
	| start   | -p kubernetes-upgrade-371786          | kubernetes-upgrade-371786 | jenkins | v1.33.1 | 21 Jun 24 19:26 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-262372 sudo           | NoKubernetes-262372       | jenkins | v1.33.1 | 21 Jun 24 19:26 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-262372                | NoKubernetes-262372       | jenkins | v1.33.1 | 21 Jun 24 19:26 UTC | 21 Jun 24 19:26 UTC |
	| start   | -p stopped-upgrade-693942             | minikube                  | jenkins | v1.26.0 | 21 Jun 24 19:27 UTC | 21 Jun 24 19:28 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| start   | -p pause-709611                       | pause-709611              | jenkins | v1.33.1 | 21 Jun 24 19:28 UTC | 21 Jun 24 19:29 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-693942 stop           | minikube                  | jenkins | v1.26.0 | 21 Jun 24 19:28 UTC | 21 Jun 24 19:28 UTC |
	| start   | -p stopped-upgrade-693942             | stopped-upgrade-693942    | jenkins | v1.33.1 | 21 Jun 24 19:28 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-843358             | cert-expiration-843358    | jenkins | v1.33.1 | 21 Jun 24 19:28 UTC | 21 Jun 24 19:29 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-843358             | cert-expiration-843358    | jenkins | v1.33.1 | 21 Jun 24 19:29 UTC | 21 Jun 24 19:29 UTC |
	| start   | -p auto-313995 --memory=3072          | auto-313995               | jenkins | v1.33.1 | 21 Jun 24 19:29 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/21 19:29:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0621 19:29:13.695939   60543 out.go:291] Setting OutFile to fd 1 ...
	I0621 19:29:13.696199   60543 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 19:29:13.696207   60543 out.go:304] Setting ErrFile to fd 2...
	I0621 19:29:13.696211   60543 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 19:29:13.696379   60543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 19:29:13.696909   60543 out.go:298] Setting JSON to false
	I0621 19:29:13.697896   60543 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7852,"bootTime":1718990302,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 19:29:13.697953   60543 start.go:139] virtualization: kvm guest
	I0621 19:29:13.700393   60543 out.go:177] * [auto-313995] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 19:29:13.701877   60543 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 19:29:13.701877   60543 notify.go:220] Checking for updates...
	I0621 19:29:13.703187   60543 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 19:29:13.704630   60543 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 19:29:13.706029   60543 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 19:29:13.707397   60543 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 19:29:13.708528   60543 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 19:29:13.710435   60543 config.go:182] Loaded profile config "kubernetes-upgrade-371786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0621 19:29:13.710614   60543 config.go:182] Loaded profile config "pause-709611": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 19:29:13.710734   60543 config.go:182] Loaded profile config "stopped-upgrade-693942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0621 19:29:13.710853   60543 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 19:29:13.754216   60543 out.go:177] * Using the kvm2 driver based on user configuration
	I0621 19:29:13.755460   60543 start.go:297] selected driver: kvm2
	I0621 19:29:13.755477   60543 start.go:901] validating driver "kvm2" against <nil>
	I0621 19:29:13.755486   60543 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 19:29:13.756258   60543 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 19:29:13.756320   60543 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 19:29:13.772713   60543 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 19:29:13.772774   60543 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0621 19:29:13.772993   60543 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0621 19:29:13.773059   60543 cni.go:84] Creating CNI manager for ""
	I0621 19:29:13.773075   60543 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0621 19:29:13.773087   60543 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0621 19:29:13.773157   60543 start.go:340] cluster config:
	{Name:auto-313995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:auto-313995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 19:29:13.773262   60543 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 19:29:13.775957   60543 out.go:177] * Starting "auto-313995" primary control-plane node in "auto-313995" cluster
	I0621 19:29:13.777112   60543 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 19:29:13.777154   60543 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 19:29:13.777166   60543 cache.go:56] Caching tarball of preloaded images
	I0621 19:29:13.777259   60543 preload.go:173] Found /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0621 19:29:13.777273   60543 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0621 19:29:13.777361   60543 profile.go:143] Saving config to /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/auto-313995/config.json ...
	I0621 19:29:13.777379   60543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/auto-313995/config.json: {Name:mkc54a2a989ad9625979ec901377d998419a191d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 19:29:13.777524   60543 start.go:360] acquireMachinesLock for auto-313995: {Name:mkdb5ead19d46168ac3b04a7a163113221efea18 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0621 19:29:13.777556   60543 start.go:364] duration metric: took 17.328µs to acquireMachinesLock for "auto-313995"
	I0621 19:29:13.777579   60543 start.go:93] Provisioning new machine with config: &{Name:auto-313995 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.2 ClusterName:auto-313995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 19:29:13.777660   60543 start.go:125] createHost starting for "" (driver="kvm2")
	I0621 19:29:13.144411   60080 api_server.go:279] https://192.168.72.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0621 19:29:13.144436   60080 api_server.go:103] status: https://192.168.72.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0621 19:29:13.144452   60080 api_server.go:253] Checking apiserver healthz at https://192.168.72.185:8443/healthz ...
	I0621 19:29:13.184146   60080 api_server.go:279] https://192.168.72.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0621 19:29:13.184172   60080 api_server.go:103] status: https://192.168.72.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0621 19:29:13.427553   60080 api_server.go:253] Checking apiserver healthz at https://192.168.72.185:8443/healthz ...
	I0621 19:29:13.434699   60080 api_server.go:279] https://192.168.72.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0621 19:29:13.434730   60080 api_server.go:103] status: https://192.168.72.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0621 19:29:13.928012   60080 api_server.go:253] Checking apiserver healthz at https://192.168.72.185:8443/healthz ...
	I0621 19:29:13.934187   60080 api_server.go:279] https://192.168.72.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0621 19:29:13.934218   60080 api_server.go:103] status: https://192.168.72.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0621 19:29:14.427889   60080 api_server.go:253] Checking apiserver healthz at https://192.168.72.185:8443/healthz ...
	I0621 19:29:14.435189   60080 api_server.go:279] https://192.168.72.185:8443/healthz returned 200:
	ok
	I0621 19:29:14.444585   60080 api_server.go:141] control plane version: v1.24.1
	I0621 19:29:14.444619   60080 api_server.go:131] duration metric: took 6.517696057s to wait for apiserver health ...
	I0621 19:29:14.444630   60080 cni.go:84] Creating CNI manager for ""
	I0621 19:29:14.444640   60080 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0621 19:29:14.446952   60080 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0621 19:29:14.448313   60080 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0621 19:29:14.460055   60080 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0621 19:29:14.479848   60080 system_pods.go:43] waiting for kube-system pods to appear ...
	I0621 19:29:14.488179   60080 system_pods.go:59] 5 kube-system pods found
	I0621 19:29:14.488213   60080 system_pods.go:61] "etcd-stopped-upgrade-693942" [7cfcbf0b-ab90-45c9-b745-b258b888791d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0621 19:29:14.488219   60080 system_pods.go:61] "kube-apiserver-stopped-upgrade-693942" [5b378f7c-6b93-4da8-a6ce-fcb236ce7d58] Pending
	I0621 19:29:14.488230   60080 system_pods.go:61] "kube-controller-manager-stopped-upgrade-693942" [8047661b-a12b-4919-af74-202b33c23eb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0621 19:29:14.488239   60080 system_pods.go:61] "kube-scheduler-stopped-upgrade-693942" [2a4f7267-5fb5-438c-8aa7-32f4945b32ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0621 19:29:14.488246   60080 system_pods.go:61] "storage-provisioner" [2550c21c-51ec-4969-88dc-c1c954b296e9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0621 19:29:14.488255   60080 system_pods.go:74] duration metric: took 8.384555ms to wait for pod list to return data ...
	I0621 19:29:14.488270   60080 node_conditions.go:102] verifying NodePressure condition ...
	I0621 19:29:14.491694   60080 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0621 19:29:14.491724   60080 node_conditions.go:123] node cpu capacity is 2
	I0621 19:29:14.491735   60080 node_conditions.go:105] duration metric: took 3.4604ms to run NodePressure ...
	I0621 19:29:14.491755   60080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0621 19:29:14.693744   60080 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0621 19:29:14.705517   60080 ops.go:34] apiserver oom_adj: -16
	I0621 19:29:14.705537   60080 kubeadm.go:591] duration metric: took 10.127080539s to restartPrimaryControlPlane
	I0621 19:29:14.705548   60080 kubeadm.go:393] duration metric: took 10.195808316s to StartCluster
	I0621 19:29:14.705567   60080 settings.go:142] acquiring lock: {Name:mkdbb660cad4d8fb446e5c2ca4439ea3326e9592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 19:29:14.705648   60080 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 19:29:14.706784   60080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19112-8111/kubeconfig: {Name:mk87038194ab41f67dd50d90b017d32a83c3da4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0621 19:29:14.707043   60080 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.185 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0621 19:29:14.707124   60080 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0621 19:29:14.707221   60080 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-693942"
	I0621 19:29:14.707250   60080 config.go:182] Loaded profile config "stopped-upgrade-693942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0621 19:29:14.707258   60080 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-693942"
	I0621 19:29:14.707255   60080 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-693942"
	I0621 19:29:14.707298   60080 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-693942"
	W0621 19:29:14.707309   60080 addons.go:243] addon storage-provisioner should already be in state true
	I0621 19:29:14.707340   60080 host.go:66] Checking if "stopped-upgrade-693942" exists ...
	I0621 19:29:14.707706   60080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 19:29:14.707736   60080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 19:29:14.707710   60080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 19:29:14.707870   60080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 19:29:14.708628   60080 out.go:177] * Verifying Kubernetes components...
	I0621 19:29:14.709791   60080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0621 19:29:14.723641   60080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45443
	I0621 19:29:14.724094   60080 main.go:141] libmachine: () Calling .GetVersion
	I0621 19:29:14.724626   60080 main.go:141] libmachine: Using API Version  1
	I0621 19:29:14.724660   60080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 19:29:14.725075   60080 main.go:141] libmachine: () Calling .GetMachineName
	I0621 19:29:14.725667   60080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 19:29:14.725712   60080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 19:29:14.726187   60080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43585
	I0621 19:29:14.726534   60080 main.go:141] libmachine: () Calling .GetVersion
	I0621 19:29:14.727054   60080 main.go:141] libmachine: Using API Version  1
	I0621 19:29:14.727079   60080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 19:29:14.727462   60080 main.go:141] libmachine: () Calling .GetMachineName
	I0621 19:29:14.727634   60080 main.go:141] libmachine: (stopped-upgrade-693942) Calling .GetState
	I0621 19:29:14.730701   60080 kapi.go:59] client config for stopped-upgrade-693942: &rest.Config{Host:"https://192.168.72.185:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/stopped-upgrade-693942/client.crt", KeyFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/profiles/stopped-upgrade-693942/client.key", CAFile:"/home/jenkins/minikube-integration/19112-8111/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]
uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf98a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0621 19:29:14.731042   60080 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-693942"
	W0621 19:29:14.731060   60080 addons.go:243] addon default-storageclass should already be in state true
	I0621 19:29:14.731087   60080 host.go:66] Checking if "stopped-upgrade-693942" exists ...
	I0621 19:29:14.731443   60080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 19:29:14.731469   60080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 19:29:14.743934   60080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35573
	I0621 19:29:14.744369   60080 main.go:141] libmachine: () Calling .GetVersion
	I0621 19:29:14.744927   60080 main.go:141] libmachine: Using API Version  1
	I0621 19:29:14.744948   60080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 19:29:14.745361   60080 main.go:141] libmachine: () Calling .GetMachineName
	I0621 19:29:14.745553   60080 main.go:141] libmachine: (stopped-upgrade-693942) Calling .GetState
	I0621 19:29:14.747595   60080 main.go:141] libmachine: (stopped-upgrade-693942) Calling .DriverName
	I0621 19:29:14.749784   60080 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.382960001Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718998156382926149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=65afde1a-98d2-4041-b2ed-3df742883ac1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.383679994Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb2b0672-fffd-4ea8-aa2a-f653eed5e5bf name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.383777149Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb2b0672-fffd-4ea8-aa2a-f653eed5e5bf name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.384128972Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:201446faeb7196ba6b136e50c92542d671158faeb4f0d118dbd76c4b93b2a07b,PodSandboxId:9d997d0b0226ea5bd74be40bb5858c442c955a811c1f2c07aaaa944be4d5ac5c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718998135309913283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s4tzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89899309-4a41-4043-b917-9d05815d0a40,},Annotations:map[string]string{io.kubernetes.container.hash: 7206c367,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad8c5ff81c08d052e24e69c8b4d9184d3a838cd2c26adb71c485205dd8e5457,PodSandboxId:5ff8679648f946ddb09902c8738966bbd9277f4f27b66de776e6e6676174d937,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718998135302517585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5gg8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: f4afccdb-9436-419e-812f-5d1b8a9eba53,},Annotations:map[string]string{io.kubernetes.container.hash: b24fb4ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69e4f10489b5f4a8393d1680e7287c5869a6a3438e2c7acd9e47fdc3a6a9df3,PodSandboxId:6064702d2d887d6da40b4b5426e2300f5fe221fd546c893e6ded0b2ea0dcd24c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718998131486871343,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 484e9c7a7b
59f963f1910971c476884a,},Annotations:map[string]string{io.kubernetes.container.hash: da7e579a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f934a8c3fef0a9d187d3b3caa28626790e3e3438edf85d5fe8eb9236699adb58,PodSandboxId:91d24172b47844ef1594faf9c6d935a22720a7e58b5dcc65671e1630e307f4bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718998131458177079,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a3ddf226b2d6c1a4a645ae4942
5b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd43fc1dab57e167c5fee1ee5154bc6e49a8497eba7cc582fb77f7eec0f1ea9,PodSandboxId:a994b2ee75d6df6fb3e52852c36d7799a01ead686cba358c365f328cb9a22ffd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718998131491226677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 195415512a07a028b784af173ab67f1b,},Annotations:map[string]string{io.kubernete
s.container.hash: 73210495,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb757a14e8f334e96fbed7e4ddf9e1193691b24ac90bd56945a13823ed18128e,PodSandboxId:6a748f289957adab0a3817216368f70baa12cfce42ac13d72c8460b27a11cc59,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718998131470819049,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8d208ac65513db97b663391a68f6c9,},Annotations:map[string]string{io.
kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a30403507079b51f748bee50c01d61a9f3e828794486d339919a1a454c9b7fe3,PodSandboxId:4d007334719fd50361661fd8ccb1ce081a0a1b1a7d70b93e6262a7f982e980dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718998117173089929,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s4tzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89899309-4a41-4043-b917-9d05815d0a40,},Annotations:map[string]string{io.kubernetes.container.hash: 7206c
367,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e1ab86d0e48628c6f5def0fc98d4bf59fb97125ee2adb85fae83821bec9a5e,PodSandboxId:b4cc3efa63d4546090c14e773a3ed5b377ab3e34e18be7296e9095df96a60dbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718998116373264542,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-5gg8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4afccdb-9436-419e-812f-5d1b8a9eba53,},Annotations:map[string]string{io.kubernetes.container.hash: b24fb4ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d01011176eab3038daf9f5589620be76a4c4b032025e51096d03c3680f337dbf,PodSandboxId:dc00428f46239543da292e5ed36fc55c7f0e97bb79602bce648cf0bac08f7afe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718998116255151258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-paus
e-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a3ddf226b2d6c1a4a645ae49425b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc193ffb133fd929a358fd45295a38dd74f0ee1d7bf31e02d11f625211e9db43,PodSandboxId:0c77fb3b14b7a14822094758e0da022ee9ade4e9c50a4072251e8613966010ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718998116148812922,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-709611,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 195415512a07a028b784af173ab67f1b,},Annotations:map[string]string{io.kubernetes.container.hash: 73210495,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8baf767038d56ac9070d98c044a714b87319e12a20b48ff73b1384df8edbbac6,PodSandboxId:a429d189b269b6ddc79916f44e9c624e0a7eac9aec2de5de30c726f5d6763522,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1718998116120768984,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 484e9c7a7b59f963f1910971c476884a,},Annotations:map[string]string{io.kubernetes.container.hash: da7e579a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cfc092ef238980872dbff2baa58d701e1a8d21f278152ff4a7d0614c5317788,PodSandboxId:50d831970405c5c958be73452b99daa51a0ee3695ea1a0783fe9e0427e104887,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1718998116029023213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ba8d208ac65513db97b663391a68f6c9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb2b0672-fffd-4ea8-aa2a-f653eed5e5bf name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.426107310Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dac57c84-21d1-41a8-99ae-7ebec85039f9 name=/runtime.v1.RuntimeService/Version
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.426182156Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dac57c84-21d1-41a8-99ae-7ebec85039f9 name=/runtime.v1.RuntimeService/Version
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.427248391Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4e97d207-61d6-43bc-8b0f-75b9011b28e1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.427693652Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718998156427669315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4e97d207-61d6-43bc-8b0f-75b9011b28e1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.428146842Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8687628-b7f6-4963-82e9-d3a12dc59210 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.428224440Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8687628-b7f6-4963-82e9-d3a12dc59210 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.428882263Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:201446faeb7196ba6b136e50c92542d671158faeb4f0d118dbd76c4b93b2a07b,PodSandboxId:9d997d0b0226ea5bd74be40bb5858c442c955a811c1f2c07aaaa944be4d5ac5c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718998135309913283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s4tzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89899309-4a41-4043-b917-9d05815d0a40,},Annotations:map[string]string{io.kubernetes.container.hash: 7206c367,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad8c5ff81c08d052e24e69c8b4d9184d3a838cd2c26adb71c485205dd8e5457,PodSandboxId:5ff8679648f946ddb09902c8738966bbd9277f4f27b66de776e6e6676174d937,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718998135302517585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5gg8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: f4afccdb-9436-419e-812f-5d1b8a9eba53,},Annotations:map[string]string{io.kubernetes.container.hash: b24fb4ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69e4f10489b5f4a8393d1680e7287c5869a6a3438e2c7acd9e47fdc3a6a9df3,PodSandboxId:6064702d2d887d6da40b4b5426e2300f5fe221fd546c893e6ded0b2ea0dcd24c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718998131486871343,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 484e9c7a7b
59f963f1910971c476884a,},Annotations:map[string]string{io.kubernetes.container.hash: da7e579a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f934a8c3fef0a9d187d3b3caa28626790e3e3438edf85d5fe8eb9236699adb58,PodSandboxId:91d24172b47844ef1594faf9c6d935a22720a7e58b5dcc65671e1630e307f4bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718998131458177079,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a3ddf226b2d6c1a4a645ae4942
5b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd43fc1dab57e167c5fee1ee5154bc6e49a8497eba7cc582fb77f7eec0f1ea9,PodSandboxId:a994b2ee75d6df6fb3e52852c36d7799a01ead686cba358c365f328cb9a22ffd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718998131491226677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 195415512a07a028b784af173ab67f1b,},Annotations:map[string]string{io.kubernete
s.container.hash: 73210495,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb757a14e8f334e96fbed7e4ddf9e1193691b24ac90bd56945a13823ed18128e,PodSandboxId:6a748f289957adab0a3817216368f70baa12cfce42ac13d72c8460b27a11cc59,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718998131470819049,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8d208ac65513db97b663391a68f6c9,},Annotations:map[string]string{io.
kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a30403507079b51f748bee50c01d61a9f3e828794486d339919a1a454c9b7fe3,PodSandboxId:4d007334719fd50361661fd8ccb1ce081a0a1b1a7d70b93e6262a7f982e980dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718998117173089929,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s4tzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89899309-4a41-4043-b917-9d05815d0a40,},Annotations:map[string]string{io.kubernetes.container.hash: 7206c
367,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e1ab86d0e48628c6f5def0fc98d4bf59fb97125ee2adb85fae83821bec9a5e,PodSandboxId:b4cc3efa63d4546090c14e773a3ed5b377ab3e34e18be7296e9095df96a60dbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718998116373264542,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-5gg8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4afccdb-9436-419e-812f-5d1b8a9eba53,},Annotations:map[string]string{io.kubernetes.container.hash: b24fb4ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d01011176eab3038daf9f5589620be76a4c4b032025e51096d03c3680f337dbf,PodSandboxId:dc00428f46239543da292e5ed36fc55c7f0e97bb79602bce648cf0bac08f7afe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718998116255151258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-paus
e-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a3ddf226b2d6c1a4a645ae49425b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc193ffb133fd929a358fd45295a38dd74f0ee1d7bf31e02d11f625211e9db43,PodSandboxId:0c77fb3b14b7a14822094758e0da022ee9ade4e9c50a4072251e8613966010ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718998116148812922,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-709611,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 195415512a07a028b784af173ab67f1b,},Annotations:map[string]string{io.kubernetes.container.hash: 73210495,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8baf767038d56ac9070d98c044a714b87319e12a20b48ff73b1384df8edbbac6,PodSandboxId:a429d189b269b6ddc79916f44e9c624e0a7eac9aec2de5de30c726f5d6763522,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1718998116120768984,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 484e9c7a7b59f963f1910971c476884a,},Annotations:map[string]string{io.kubernetes.container.hash: da7e579a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cfc092ef238980872dbff2baa58d701e1a8d21f278152ff4a7d0614c5317788,PodSandboxId:50d831970405c5c958be73452b99daa51a0ee3695ea1a0783fe9e0427e104887,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1718998116029023213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ba8d208ac65513db97b663391a68f6c9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b8687628-b7f6-4963-82e9-d3a12dc59210 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.478439709Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3afd4fd-5157-4457-a95b-d64422c1d520 name=/runtime.v1.RuntimeService/Version
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.478526053Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3afd4fd-5157-4457-a95b-d64422c1d520 name=/runtime.v1.RuntimeService/Version
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.480037089Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4f61f95e-1779-4fd2-9b7b-d279b0cc9447 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.480594435Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718998156480553259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4f61f95e-1779-4fd2-9b7b-d279b0cc9447 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.481423712Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ace4f10-4e36-4c5c-ab17-5346b99dee16 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.481495546Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ace4f10-4e36-4c5c-ab17-5346b99dee16 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.481847916Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:201446faeb7196ba6b136e50c92542d671158faeb4f0d118dbd76c4b93b2a07b,PodSandboxId:9d997d0b0226ea5bd74be40bb5858c442c955a811c1f2c07aaaa944be4d5ac5c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718998135309913283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s4tzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89899309-4a41-4043-b917-9d05815d0a40,},Annotations:map[string]string{io.kubernetes.container.hash: 7206c367,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad8c5ff81c08d052e24e69c8b4d9184d3a838cd2c26adb71c485205dd8e5457,PodSandboxId:5ff8679648f946ddb09902c8738966bbd9277f4f27b66de776e6e6676174d937,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718998135302517585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5gg8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: f4afccdb-9436-419e-812f-5d1b8a9eba53,},Annotations:map[string]string{io.kubernetes.container.hash: b24fb4ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69e4f10489b5f4a8393d1680e7287c5869a6a3438e2c7acd9e47fdc3a6a9df3,PodSandboxId:6064702d2d887d6da40b4b5426e2300f5fe221fd546c893e6ded0b2ea0dcd24c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718998131486871343,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 484e9c7a7b
59f963f1910971c476884a,},Annotations:map[string]string{io.kubernetes.container.hash: da7e579a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f934a8c3fef0a9d187d3b3caa28626790e3e3438edf85d5fe8eb9236699adb58,PodSandboxId:91d24172b47844ef1594faf9c6d935a22720a7e58b5dcc65671e1630e307f4bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718998131458177079,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a3ddf226b2d6c1a4a645ae4942
5b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd43fc1dab57e167c5fee1ee5154bc6e49a8497eba7cc582fb77f7eec0f1ea9,PodSandboxId:a994b2ee75d6df6fb3e52852c36d7799a01ead686cba358c365f328cb9a22ffd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718998131491226677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 195415512a07a028b784af173ab67f1b,},Annotations:map[string]string{io.kubernete
s.container.hash: 73210495,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb757a14e8f334e96fbed7e4ddf9e1193691b24ac90bd56945a13823ed18128e,PodSandboxId:6a748f289957adab0a3817216368f70baa12cfce42ac13d72c8460b27a11cc59,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718998131470819049,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8d208ac65513db97b663391a68f6c9,},Annotations:map[string]string{io.
kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a30403507079b51f748bee50c01d61a9f3e828794486d339919a1a454c9b7fe3,PodSandboxId:4d007334719fd50361661fd8ccb1ce081a0a1b1a7d70b93e6262a7f982e980dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718998117173089929,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s4tzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89899309-4a41-4043-b917-9d05815d0a40,},Annotations:map[string]string{io.kubernetes.container.hash: 7206c
367,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e1ab86d0e48628c6f5def0fc98d4bf59fb97125ee2adb85fae83821bec9a5e,PodSandboxId:b4cc3efa63d4546090c14e773a3ed5b377ab3e34e18be7296e9095df96a60dbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718998116373264542,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-5gg8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4afccdb-9436-419e-812f-5d1b8a9eba53,},Annotations:map[string]string{io.kubernetes.container.hash: b24fb4ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d01011176eab3038daf9f5589620be76a4c4b032025e51096d03c3680f337dbf,PodSandboxId:dc00428f46239543da292e5ed36fc55c7f0e97bb79602bce648cf0bac08f7afe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718998116255151258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-paus
e-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a3ddf226b2d6c1a4a645ae49425b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc193ffb133fd929a358fd45295a38dd74f0ee1d7bf31e02d11f625211e9db43,PodSandboxId:0c77fb3b14b7a14822094758e0da022ee9ade4e9c50a4072251e8613966010ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718998116148812922,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-709611,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 195415512a07a028b784af173ab67f1b,},Annotations:map[string]string{io.kubernetes.container.hash: 73210495,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8baf767038d56ac9070d98c044a714b87319e12a20b48ff73b1384df8edbbac6,PodSandboxId:a429d189b269b6ddc79916f44e9c624e0a7eac9aec2de5de30c726f5d6763522,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1718998116120768984,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 484e9c7a7b59f963f1910971c476884a,},Annotations:map[string]string{io.kubernetes.container.hash: da7e579a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cfc092ef238980872dbff2baa58d701e1a8d21f278152ff4a7d0614c5317788,PodSandboxId:50d831970405c5c958be73452b99daa51a0ee3695ea1a0783fe9e0427e104887,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1718998116029023213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ba8d208ac65513db97b663391a68f6c9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ace4f10-4e36-4c5c-ab17-5346b99dee16 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.535725467Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f53b830e-d996-4d5d-bdeb-da5ea34811b1 name=/runtime.v1.RuntimeService/Version
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.535834375Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f53b830e-d996-4d5d-bdeb-da5ea34811b1 name=/runtime.v1.RuntimeService/Version
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.538855922Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1088c4ca-a5e5-4028-ac4a-ce345cda9ed0 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.539967697Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718998156539927591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1088c4ca-a5e5-4028-ac4a-ce345cda9ed0 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.541291884Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e02f5f58-c1f3-46ab-9f7a-222563535d28 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.541409385Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e02f5f58-c1f3-46ab-9f7a-222563535d28 name=/runtime.v1.RuntimeService/ListContainers
	Jun 21 19:29:16 pause-709611 crio[2965]: time="2024-06-21 19:29:16.541914449Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:201446faeb7196ba6b136e50c92542d671158faeb4f0d118dbd76c4b93b2a07b,PodSandboxId:9d997d0b0226ea5bd74be40bb5858c442c955a811c1f2c07aaaa944be4d5ac5c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718998135309913283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s4tzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89899309-4a41-4043-b917-9d05815d0a40,},Annotations:map[string]string{io.kubernetes.container.hash: 7206c367,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad8c5ff81c08d052e24e69c8b4d9184d3a838cd2c26adb71c485205dd8e5457,PodSandboxId:5ff8679648f946ddb09902c8738966bbd9277f4f27b66de776e6e6676174d937,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1718998135302517585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5gg8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: f4afccdb-9436-419e-812f-5d1b8a9eba53,},Annotations:map[string]string{io.kubernetes.container.hash: b24fb4ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69e4f10489b5f4a8393d1680e7287c5869a6a3438e2c7acd9e47fdc3a6a9df3,PodSandboxId:6064702d2d887d6da40b4b5426e2300f5fe221fd546c893e6ded0b2ea0dcd24c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1718998131486871343,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 484e9c7a7b
59f963f1910971c476884a,},Annotations:map[string]string{io.kubernetes.container.hash: da7e579a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f934a8c3fef0a9d187d3b3caa28626790e3e3438edf85d5fe8eb9236699adb58,PodSandboxId:91d24172b47844ef1594faf9c6d935a22720a7e58b5dcc65671e1630e307f4bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1718998131458177079,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a3ddf226b2d6c1a4a645ae4942
5b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd43fc1dab57e167c5fee1ee5154bc6e49a8497eba7cc582fb77f7eec0f1ea9,PodSandboxId:a994b2ee75d6df6fb3e52852c36d7799a01ead686cba358c365f328cb9a22ffd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718998131491226677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 195415512a07a028b784af173ab67f1b,},Annotations:map[string]string{io.kubernete
s.container.hash: 73210495,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb757a14e8f334e96fbed7e4ddf9e1193691b24ac90bd56945a13823ed18128e,PodSandboxId:6a748f289957adab0a3817216368f70baa12cfce42ac13d72c8460b27a11cc59,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1718998131470819049,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba8d208ac65513db97b663391a68f6c9,},Annotations:map[string]string{io.
kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a30403507079b51f748bee50c01d61a9f3e828794486d339919a1a454c9b7fe3,PodSandboxId:4d007334719fd50361661fd8ccb1ce081a0a1b1a7d70b93e6262a7f982e980dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718998117173089929,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s4tzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89899309-4a41-4043-b917-9d05815d0a40,},Annotations:map[string]string{io.kubernetes.container.hash: 7206c
367,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04e1ab86d0e48628c6f5def0fc98d4bf59fb97125ee2adb85fae83821bec9a5e,PodSandboxId:b4cc3efa63d4546090c14e773a3ed5b377ab3e34e18be7296e9095df96a60dbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1718998116373264542,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-5gg8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4afccdb-9436-419e-812f-5d1b8a9eba53,},Annotations:map[string]string{io.kubernetes.container.hash: b24fb4ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d01011176eab3038daf9f5589620be76a4c4b032025e51096d03c3680f337dbf,PodSandboxId:dc00428f46239543da292e5ed36fc55c7f0e97bb79602bce648cf0bac08f7afe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1718998116255151258,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-paus
e-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a3ddf226b2d6c1a4a645ae49425b2d,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc193ffb133fd929a358fd45295a38dd74f0ee1d7bf31e02d11f625211e9db43,PodSandboxId:0c77fb3b14b7a14822094758e0da022ee9ade4e9c50a4072251e8613966010ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718998116148812922,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-709611,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 195415512a07a028b784af173ab67f1b,},Annotations:map[string]string{io.kubernetes.container.hash: 73210495,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8baf767038d56ac9070d98c044a714b87319e12a20b48ff73b1384df8edbbac6,PodSandboxId:a429d189b269b6ddc79916f44e9c624e0a7eac9aec2de5de30c726f5d6763522,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1718998116120768984,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 484e9c7a7b59f963f1910971c476884a,},Annotations:map[string]string{io.kubernetes.container.hash: da7e579a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cfc092ef238980872dbff2baa58d701e1a8d21f278152ff4a7d0614c5317788,PodSandboxId:50d831970405c5c958be73452b99daa51a0ee3695ea1a0783fe9e0427e104887,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1718998116029023213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-709611,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ba8d208ac65513db97b663391a68f6c9,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e02f5f58-c1f3-46ab-9f7a-222563535d28 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	201446faeb719       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   21 seconds ago      Running             coredns                   2                   9d997d0b0226e       coredns-7db6d8ff4d-s4tzq
	fad8c5ff81c08       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   21 seconds ago      Running             kube-proxy                2                   5ff8679648f94       kube-proxy-5gg8h
	9bd43fc1dab57       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   25 seconds ago      Running             etcd                      2                   a994b2ee75d6d       etcd-pause-709611
	f69e4f10489b5       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   25 seconds ago      Running             kube-apiserver            2                   6064702d2d887       kube-apiserver-pause-709611
	fb757a14e8f33       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   25 seconds ago      Running             kube-controller-manager   2                   6a748f289957a       kube-controller-manager-pause-709611
	f934a8c3fef0a       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   25 seconds ago      Running             kube-scheduler            2                   91d24172b4784       kube-scheduler-pause-709611
	a30403507079b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   39 seconds ago      Exited              coredns                   1                   4d007334719fd       coredns-7db6d8ff4d-s4tzq
	04e1ab86d0e48       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   40 seconds ago      Exited              kube-proxy                1                   b4cc3efa63d45       kube-proxy-5gg8h
	d01011176eab3       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   40 seconds ago      Exited              kube-scheduler            1                   dc00428f46239       kube-scheduler-pause-709611
	bc193ffb133fd       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   40 seconds ago      Exited              etcd                      1                   0c77fb3b14b7a       etcd-pause-709611
	8baf767038d56       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   40 seconds ago      Exited              kube-apiserver            1                   a429d189b269b       kube-apiserver-pause-709611
	5cfc092ef2389       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   40 seconds ago      Exited              kube-controller-manager   1                   50d831970405c       kube-controller-manager-pause-709611
	
	
	==> coredns [201446faeb7196ba6b136e50c92542d671158faeb4f0d118dbd76c4b93b2a07b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40363 - 37112 "HINFO IN 2689991368494369451.6408760700095078131. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013030005s
	
	
	==> coredns [a30403507079b51f748bee50c01d61a9f3e828794486d339919a1a454c9b7fe3] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:52441 - 1049 "HINFO IN 6068307658687501304.8304913484256281873. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010480563s
	
	
	==> describe nodes <==
	Name:               pause-709611
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-709611
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a0d377c34faa85740cf2404ea12566198300600
	                    minikube.k8s.io/name=pause-709611
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_21T19_27_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Jun 2024 19:27:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-709611
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Jun 2024 19:29:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Jun 2024 19:28:54 +0000   Fri, 21 Jun 2024 19:27:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Jun 2024 19:28:54 +0000   Fri, 21 Jun 2024 19:27:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Jun 2024 19:28:54 +0000   Fri, 21 Jun 2024 19:27:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Jun 2024 19:28:54 +0000   Fri, 21 Jun 2024 19:27:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.75
	  Hostname:    pause-709611
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 13635eec7a2e4d85ade6317c56bfbfae
	  System UUID:                13635eec-7a2e-4d85-ade6-317c56bfbfae
	  Boot ID:                    2beaf04d-2f9a-4e10-99b9-552f6bf9ae69
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-s4tzq                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     89s
	  kube-system                 etcd-pause-709611                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         103s
	  kube-system                 kube-apiserver-pause-709611             250m (12%)    0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-controller-manager-pause-709611    200m (10%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-5gg8h                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-pause-709611             100m (5%)     0 (0%)      0 (0%)           0 (0%)         103s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 86s                  kube-proxy       
	  Normal  Starting                 21s                  kube-proxy       
	  Normal  Starting                 109s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  109s (x8 over 109s)  kubelet          Node pause-709611 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s (x8 over 109s)  kubelet          Node pause-709611 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s (x7 over 109s)  kubelet          Node pause-709611 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  109s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    103s                 kubelet          Node pause-709611 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  103s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  103s                 kubelet          Node pause-709611 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     103s                 kubelet          Node pause-709611 status is now: NodeHasSufficientPID
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  NodeReady                102s                 kubelet          Node pause-709611 status is now: NodeReady
	  Normal  RegisteredNode           90s                  node-controller  Node pause-709611 event: Registered Node pause-709611 in Controller
	  Normal  Starting                 26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)    kubelet          Node pause-709611 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)    kubelet          Node pause-709611 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)    kubelet          Node pause-709611 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9s                   node-controller  Node pause-709611 event: Registered Node pause-709611 in Controller
	
	
	==> dmesg <==
	[  +6.944707] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.064221] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061192] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.212047] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.138377] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.284987] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +4.181155] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
	[  +4.492451] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.059229] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.996926] systemd-fstab-generator[1275]: Ignoring "noauto" option for root device
	[  +0.095634] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.520645] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.738208] systemd-fstab-generator[1500]: Ignoring "noauto" option for root device
	[ +12.033391] kauditd_printk_skb: 89 callbacks suppressed
	[Jun21 19:28] systemd-fstab-generator[2633]: Ignoring "noauto" option for root device
	[  +0.240394] systemd-fstab-generator[2716]: Ignoring "noauto" option for root device
	[  +0.307149] systemd-fstab-generator[2825]: Ignoring "noauto" option for root device
	[  +0.205191] systemd-fstab-generator[2847]: Ignoring "noauto" option for root device
	[  +0.532470] systemd-fstab-generator[2947]: Ignoring "noauto" option for root device
	[ +10.902390] systemd-fstab-generator[3211]: Ignoring "noauto" option for root device
	[  +0.089151] kauditd_printk_skb: 173 callbacks suppressed
	[  +2.239814] systemd-fstab-generator[3654]: Ignoring "noauto" option for root device
	[  +4.664391] kauditd_printk_skb: 109 callbacks suppressed
	[Jun21 19:29] kauditd_printk_skb: 2 callbacks suppressed
	[  +2.639988] systemd-fstab-generator[4095]: Ignoring "noauto" option for root device
	
	
	==> etcd [9bd43fc1dab57e167c5fee1ee5154bc6e49a8497eba7cc582fb77f7eec0f1ea9] <==
	{"level":"info","ts":"2024-06-21T19:28:52.333663Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-21T19:28:52.335216Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-21T19:28:52.337779Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.75:2379"}
	{"level":"info","ts":"2024-06-21T19:28:52.344041Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-21T19:28:52.344656Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-06-21T19:28:59.521123Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"255.681591ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15851140427028917395 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-709611.17db1bb145cf0d06\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-709611.17db1bb145cf0d06\" value_size:544 lease:6627768390174141585 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-06-21T19:28:59.521461Z","caller":"traceutil/trace.go:171","msg":"trace[959245181] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"383.895736ms","start":"2024-06-21T19:28:59.137543Z","end":"2024-06-21T19:28:59.521439Z","steps":["trace[959245181] 'process raft request'  (duration: 127.586901ms)","trace[959245181] 'compare'  (duration: 255.536522ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T19:28:59.521681Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-21T19:28:59.137527Z","time spent":"384.120232ms","remote":"127.0.0.1:44956","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":616,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-709611.17db1bb145cf0d06\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-709611.17db1bb145cf0d06\" value_size:544 lease:6627768390174141585 >> failure:<>"}
	{"level":"warn","ts":"2024-06-21T19:29:00.157395Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"376.149942ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15851140427028917399 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-709611.17db1bb14b1d76ab\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-709611.17db1bb14b1d76ab\" value_size:594 lease:6627768390174141585 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-06-21T19:29:00.157659Z","caller":"traceutil/trace.go:171","msg":"trace[1645743485] linearizableReadLoop","detail":"{readStateIndex:505; appliedIndex:504; }","duration":"231.867622ms","start":"2024-06-21T19:28:59.925774Z","end":"2024-06-21T19:29:00.157642Z","steps":["trace[1645743485] 'read index received'  (duration: 38.34µs)","trace[1645743485] 'applied index is now lower than readState.Index'  (duration: 231.826975ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-21T19:29:00.157672Z","caller":"traceutil/trace.go:171","msg":"trace[2087919787] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"624.078725ms","start":"2024-06-21T19:28:59.533571Z","end":"2024-06-21T19:29:00.15765Z","steps":["trace[2087919787] 'process raft request'  (duration: 247.628289ms)","trace[2087919787] 'compare'  (duration: 376.011319ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T19:29:00.157858Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.119607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-s4tzq\" ","response":"range_response_count:1 size:5020"}
	{"level":"info","ts":"2024-06-21T19:29:00.157898Z","caller":"traceutil/trace.go:171","msg":"trace[1920827187] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-s4tzq; range_end:; response_count:1; response_revision:465; }","duration":"162.179917ms","start":"2024-06-21T19:28:59.995711Z","end":"2024-06-21T19:29:00.157891Z","steps":["trace[1920827187] 'agreement among raft nodes before linearized reading'  (duration: 162.126734ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-21T19:29:00.15785Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-21T19:28:59.533559Z","time spent":"624.252919ms","remote":"127.0.0.1:44956","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":666,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-709611.17db1bb14b1d76ab\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-709611.17db1bb14b1d76ab\" value_size:594 lease:6627768390174141585 >> failure:<>"}
	{"level":"warn","ts":"2024-06-21T19:29:00.157785Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.039154ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-s4tzq\" ","response":"range_response_count:1 size:5020"}
	{"level":"info","ts":"2024-06-21T19:29:00.159178Z","caller":"traceutil/trace.go:171","msg":"trace[1285980518] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-s4tzq; range_end:; response_count:1; response_revision:465; }","duration":"233.456326ms","start":"2024-06-21T19:28:59.925709Z","end":"2024-06-21T19:29:00.159165Z","steps":["trace[1285980518] 'agreement among raft nodes before linearized reading'  (duration: 231.989815ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-21T19:29:00.572024Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"281.242757ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15851140427028917403 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-709611.17db1bb14b1d957b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-709611.17db1bb14b1d957b\" value_size:592 lease:6627768390174141585 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-06-21T19:29:00.572205Z","caller":"traceutil/trace.go:171","msg":"trace[1699236452] linearizableReadLoop","detail":"{readStateIndex:507; appliedIndex:505; }","duration":"145.766115ms","start":"2024-06-21T19:29:00.426429Z","end":"2024-06-21T19:29:00.572195Z","steps":["trace[1699236452] 'read index received'  (duration: 145.486662ms)","trace[1699236452] 'applied index is now lower than readState.Index'  (duration: 278.894µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T19:29:00.572335Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.939853ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-s4tzq\" ","response":"range_response_count:1 size:4842"}
	{"level":"info","ts":"2024-06-21T19:29:00.572383Z","caller":"traceutil/trace.go:171","msg":"trace[1400233159] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-s4tzq; range_end:; response_count:1; response_revision:467; }","duration":"145.994272ms","start":"2024-06-21T19:29:00.426381Z","end":"2024-06-21T19:29:00.572375Z","steps":["trace[1400233159] 'agreement among raft nodes before linearized reading'  (duration: 145.897611ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-21T19:29:00.572487Z","caller":"traceutil/trace.go:171","msg":"trace[1560048698] transaction","detail":"{read_only:false; response_revision:467; number_of_response:1; }","duration":"407.088906ms","start":"2024-06-21T19:29:00.165393Z","end":"2024-06-21T19:29:00.572482Z","steps":["trace[1560048698] 'process raft request'  (duration: 406.754254ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-21T19:29:00.572726Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-21T19:29:00.16538Z","time spent":"407.133153ms","remote":"127.0.0.1:45078","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4827,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-s4tzq\" mod_revision:459 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-s4tzq\" value_size:4768 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-s4tzq\" > >"}
	{"level":"info","ts":"2024-06-21T19:29:00.572809Z","caller":"traceutil/trace.go:171","msg":"trace[1712061501] transaction","detail":"{read_only:false; response_revision:466; number_of_response:1; }","duration":"408.662061ms","start":"2024-06-21T19:29:00.16414Z","end":"2024-06-21T19:29:00.572802Z","steps":["trace[1712061501] 'process raft request'  (duration: 126.604039ms)","trace[1712061501] 'compare'  (duration: 281.15042ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-21T19:29:00.572854Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-21T19:29:00.164127Z","time spent":"408.710236ms","remote":"127.0.0.1:44956","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":664,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-709611.17db1bb14b1d957b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-709611.17db1bb14b1d957b\" value_size:592 lease:6627768390174141585 >> failure:<>"}
	{"level":"info","ts":"2024-06-21T19:29:00.842935Z","caller":"traceutil/trace.go:171","msg":"trace[1434030198] transaction","detail":"{read_only:false; response_revision:469; number_of_response:1; }","duration":"169.503602ms","start":"2024-06-21T19:29:00.673408Z","end":"2024-06-21T19:29:00.842912Z","steps":["trace[1434030198] 'process raft request'  (duration: 76.575581ms)","trace[1434030198] 'compare'  (duration: 92.808797ms)"],"step_count":2}
	
	
	==> etcd [bc193ffb133fd929a358fd45295a38dd74f0ee1d7bf31e02d11f625211e9db43] <==
	{"level":"info","ts":"2024-06-21T19:28:36.975407Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"66.029033ms"}
	{"level":"info","ts":"2024-06-21T19:28:37.005464Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-06-21T19:28:37.055905Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"63c722bb74796bf1","local-member-id":"b968d2e8d13dbfa","commit-index":469}
	{"level":"info","ts":"2024-06-21T19:28:37.056019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b968d2e8d13dbfa switched to configuration voters=()"}
	{"level":"info","ts":"2024-06-21T19:28:37.056052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b968d2e8d13dbfa became follower at term 2"}
	{"level":"info","ts":"2024-06-21T19:28:37.056082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft b968d2e8d13dbfa [peers: [], term: 2, commit: 469, applied: 0, lastindex: 469, lastterm: 2]"}
	{"level":"warn","ts":"2024-06-21T19:28:37.059724Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-06-21T19:28:37.123162Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":446}
	{"level":"info","ts":"2024-06-21T19:28:37.130251Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-06-21T19:28:37.142744Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"b968d2e8d13dbfa","timeout":"7s"}
	{"level":"info","ts":"2024-06-21T19:28:37.148801Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"b968d2e8d13dbfa"}
	{"level":"info","ts":"2024-06-21T19:28:37.149174Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"b968d2e8d13dbfa","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-06-21T19:28:37.158873Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-06-21T19:28:37.159704Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-21T19:28:37.16124Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-21T19:28:37.161258Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-21T19:28:37.161901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b968d2e8d13dbfa switched to configuration voters=(835010011998706682)"}
	{"level":"info","ts":"2024-06-21T19:28:37.162492Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"63c722bb74796bf1","local-member-id":"b968d2e8d13dbfa","added-peer-id":"b968d2e8d13dbfa","added-peer-peer-urls":["https://192.168.39.75:2380"]}
	{"level":"info","ts":"2024-06-21T19:28:37.162242Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-21T19:28:37.162791Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"63c722bb74796bf1","local-member-id":"b968d2e8d13dbfa","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-21T19:28:37.162958Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b968d2e8d13dbfa","initial-advertise-peer-urls":["https://192.168.39.75:2380"],"listen-peer-urls":["https://192.168.39.75:2380"],"advertise-client-urls":["https://192.168.39.75:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.75:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-21T19:28:37.163023Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-21T19:28:37.162263Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.75:2380"}
	{"level":"info","ts":"2024-06-21T19:28:37.163054Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.75:2380"}
	{"level":"info","ts":"2024-06-21T19:28:37.162988Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	
	
	==> kernel <==
	 19:29:16 up 2 min,  0 users,  load average: 0.82, 0.41, 0.16
	Linux pause-709611 5.10.207 #1 SMP Fri Jun 21 03:52:19 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8baf767038d56ac9070d98c044a714b87319e12a20b48ff73b1384df8edbbac6] <==
	I0621 19:28:36.866747       1 options.go:221] external host was not specified, using 192.168.39.75
	I0621 19:28:36.869848       1 server.go:148] Version: v1.30.2
	I0621 19:28:36.869892       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [f69e4f10489b5f4a8393d1680e7287c5869a6a3438e2c7acd9e47fdc3a6a9df3] <==
	I0621 19:28:54.450142       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0621 19:28:54.450165       1 cache.go:39] Caches are synced for autoregister controller
	I0621 19:28:54.450377       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0621 19:28:54.450694       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0621 19:28:54.450769       1 policy_source.go:224] refreshing policies
	I0621 19:28:54.454075       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0621 19:28:54.528827       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0621 19:28:54.532100       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0621 19:28:54.534376       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0621 19:28:54.534440       1 shared_informer.go:320] Caches are synced for configmaps
	I0621 19:28:54.536097       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0621 19:28:54.536189       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	E0621 19:28:54.551784       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0621 19:28:55.355938       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0621 19:28:56.259875       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0621 19:28:56.276649       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0621 19:28:56.324441       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0621 19:28:56.360684       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0621 19:28:56.368295       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0621 19:29:00.158713       1 trace.go:236] Trace[1660025270]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:98d190e1-71f0-4e4b-8bfc-5610e2ed64fc,client:192.168.39.75,api-group:,api-version:v1,name:,subresource:,namespace:default,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.30.2 (linux/amd64) kubernetes/3968350,verb:POST (21-Jun-2024 19:28:59.532) (total time: 626ms):
	Trace[1660025270]: ["Create etcd3" audit-id:98d190e1-71f0-4e4b-8bfc-5610e2ed64fc,key:/events/default/pause-709611.17db1bb14b1d76ab,type:*core.Event,resource:events 625ms (19:28:59.533)
	Trace[1660025270]:  ---"Txn call succeeded" 625ms (19:29:00.158)]
	Trace[1660025270]: [626.523303ms] [626.523303ms] END
	I0621 19:29:07.522711       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0621 19:29:07.527006       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [5cfc092ef238980872dbff2baa58d701e1a8d21f278152ff4a7d0614c5317788] <==
	
	
	==> kube-controller-manager [fb757a14e8f334e96fbed7e4ddf9e1193691b24ac90bd56945a13823ed18128e] <==
	I0621 19:29:07.574170       1 shared_informer.go:320] Caches are synced for GC
	I0621 19:29:07.587693       1 shared_informer.go:320] Caches are synced for job
	I0621 19:29:07.635935       1 shared_informer.go:320] Caches are synced for daemon sets
	I0621 19:29:07.653273       1 shared_informer.go:320] Caches are synced for taint
	I0621 19:29:07.653413       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0621 19:29:07.653533       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-709611"
	I0621 19:29:07.653596       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0621 19:29:07.657677       1 shared_informer.go:320] Caches are synced for attach detach
	I0621 19:29:07.661355       1 shared_informer.go:320] Caches are synced for resource quota
	I0621 19:29:07.665216       1 shared_informer.go:320] Caches are synced for ephemeral
	I0621 19:29:07.686690       1 shared_informer.go:320] Caches are synced for expand
	I0621 19:29:07.686805       1 shared_informer.go:320] Caches are synced for stateful set
	I0621 19:29:07.697793       1 shared_informer.go:320] Caches are synced for PVC protection
	I0621 19:29:07.701174       1 shared_informer.go:320] Caches are synced for persistent volume
	I0621 19:29:07.728122       1 shared_informer.go:320] Caches are synced for disruption
	I0621 19:29:07.730463       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0621 19:29:07.736158       1 shared_informer.go:320] Caches are synced for deployment
	I0621 19:29:07.748684       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0621 19:29:07.748763       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0621 19:29:07.748743       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0621 19:29:07.748995       1 shared_informer.go:320] Caches are synced for resource quota
	I0621 19:29:07.749049       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0621 19:29:08.192702       1 shared_informer.go:320] Caches are synced for garbage collector
	I0621 19:29:08.243597       1 shared_informer.go:320] Caches are synced for garbage collector
	I0621 19:29:08.243663       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [04e1ab86d0e48628c6f5def0fc98d4bf59fb97125ee2adb85fae83821bec9a5e] <==
	I0621 19:28:37.519698       1 server_linux.go:69] "Using iptables proxy"
	
	
	==> kube-proxy [fad8c5ff81c08d052e24e69c8b4d9184d3a838cd2c26adb71c485205dd8e5457] <==
	I0621 19:28:55.563035       1 server_linux.go:69] "Using iptables proxy"
	I0621 19:28:55.585111       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.75"]
	I0621 19:28:55.655494       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0621 19:28:55.655559       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0621 19:28:55.655579       1 server_linux.go:165] "Using iptables Proxier"
	I0621 19:28:55.659852       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0621 19:28:55.660065       1 server.go:872] "Version info" version="v1.30.2"
	I0621 19:28:55.660088       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 19:28:55.661564       1 config.go:192] "Starting service config controller"
	I0621 19:28:55.661685       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0621 19:28:55.661751       1 config.go:101] "Starting endpoint slice config controller"
	I0621 19:28:55.661759       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0621 19:28:55.662392       1 config.go:319] "Starting node config controller"
	I0621 19:28:55.662421       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0621 19:28:55.762589       1 shared_informer.go:320] Caches are synced for node config
	I0621 19:28:55.762721       1 shared_informer.go:320] Caches are synced for service config
	I0621 19:28:55.762733       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d01011176eab3038daf9f5589620be76a4c4b032025e51096d03c3680f337dbf] <==
	
	
	==> kube-scheduler [f934a8c3fef0a9d187d3b3caa28626790e3e3438edf85d5fe8eb9236699adb58] <==
	I0621 19:28:53.022710       1 serving.go:380] Generated self-signed cert in-memory
	W0621 19:28:54.355421       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0621 19:28:54.355526       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0621 19:28:54.355545       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0621 19:28:54.355556       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0621 19:28:54.429283       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0621 19:28:54.429413       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0621 19:28:54.431449       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0621 19:28:54.431877       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0621 19:28:54.435992       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0621 19:28:54.431964       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0621 19:28:54.537315       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 21 19:28:51 pause-709611 kubelet[3661]: I0621 19:28:51.190637    3661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d7a3ddf226b2d6c1a4a645ae49425b2d-kubeconfig\") pod \"kube-scheduler-pause-709611\" (UID: \"d7a3ddf226b2d6c1a4a645ae49425b2d\") " pod="kube-system/kube-scheduler-pause-709611"
	Jun 21 19:28:51 pause-709611 kubelet[3661]: E0621 19:28:51.191267    3661 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-709611?timeout=10s\": dial tcp 192.168.39.75:8443: connect: connection refused" interval="400ms"
	Jun 21 19:28:51 pause-709611 kubelet[3661]: I0621 19:28:51.291062    3661 kubelet_node_status.go:73] "Attempting to register node" node="pause-709611"
	Jun 21 19:28:51 pause-709611 kubelet[3661]: E0621 19:28:51.292214    3661 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.75:8443: connect: connection refused" node="pause-709611"
	Jun 21 19:28:51 pause-709611 kubelet[3661]: I0621 19:28:51.439501    3661 scope.go:117] "RemoveContainer" containerID="bc193ffb133fd929a358fd45295a38dd74f0ee1d7bf31e02d11f625211e9db43"
	Jun 21 19:28:51 pause-709611 kubelet[3661]: I0621 19:28:51.439853    3661 scope.go:117] "RemoveContainer" containerID="8baf767038d56ac9070d98c044a714b87319e12a20b48ff73b1384df8edbbac6"
	Jun 21 19:28:51 pause-709611 kubelet[3661]: I0621 19:28:51.441150    3661 scope.go:117] "RemoveContainer" containerID="d01011176eab3038daf9f5589620be76a4c4b032025e51096d03c3680f337dbf"
	Jun 21 19:28:51 pause-709611 kubelet[3661]: I0621 19:28:51.441496    3661 scope.go:117] "RemoveContainer" containerID="5cfc092ef238980872dbff2baa58d701e1a8d21f278152ff4a7d0614c5317788"
	Jun 21 19:28:51 pause-709611 kubelet[3661]: E0621 19:28:51.597934    3661 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-709611?timeout=10s\": dial tcp 192.168.39.75:8443: connect: connection refused" interval="800ms"
	Jun 21 19:28:51 pause-709611 kubelet[3661]: I0621 19:28:51.694774    3661 kubelet_node_status.go:73] "Attempting to register node" node="pause-709611"
	Jun 21 19:28:51 pause-709611 kubelet[3661]: E0621 19:28:51.695553    3661 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.75:8443: connect: connection refused" node="pause-709611"
	Jun 21 19:28:52 pause-709611 kubelet[3661]: I0621 19:28:52.497297    3661 kubelet_node_status.go:73] "Attempting to register node" node="pause-709611"
	Jun 21 19:28:54 pause-709611 kubelet[3661]: I0621 19:28:54.471411    3661 kubelet_node_status.go:112] "Node was previously registered" node="pause-709611"
	Jun 21 19:28:54 pause-709611 kubelet[3661]: I0621 19:28:54.471538    3661 kubelet_node_status.go:76] "Successfully registered node" node="pause-709611"
	Jun 21 19:28:54 pause-709611 kubelet[3661]: I0621 19:28:54.473320    3661 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 21 19:28:54 pause-709611 kubelet[3661]: I0621 19:28:54.474438    3661 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 21 19:28:54 pause-709611 kubelet[3661]: I0621 19:28:54.967847    3661 apiserver.go:52] "Watching apiserver"
	Jun 21 19:28:54 pause-709611 kubelet[3661]: I0621 19:28:54.971949    3661 topology_manager.go:215] "Topology Admit Handler" podUID="f4afccdb-9436-419e-812f-5d1b8a9eba53" podNamespace="kube-system" podName="kube-proxy-5gg8h"
	Jun 21 19:28:54 pause-709611 kubelet[3661]: I0621 19:28:54.972367    3661 topology_manager.go:215] "Topology Admit Handler" podUID="89899309-4a41-4043-b917-9d05815d0a40" podNamespace="kube-system" podName="coredns-7db6d8ff4d-s4tzq"
	Jun 21 19:28:54 pause-709611 kubelet[3661]: I0621 19:28:54.985354    3661 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 21 19:28:55 pause-709611 kubelet[3661]: I0621 19:28:55.001596    3661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4afccdb-9436-419e-812f-5d1b8a9eba53-xtables-lock\") pod \"kube-proxy-5gg8h\" (UID: \"f4afccdb-9436-419e-812f-5d1b8a9eba53\") " pod="kube-system/kube-proxy-5gg8h"
	Jun 21 19:28:55 pause-709611 kubelet[3661]: I0621 19:28:55.001810    3661 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4afccdb-9436-419e-812f-5d1b8a9eba53-lib-modules\") pod \"kube-proxy-5gg8h\" (UID: \"f4afccdb-9436-419e-812f-5d1b8a9eba53\") " pod="kube-system/kube-proxy-5gg8h"
	Jun 21 19:28:55 pause-709611 kubelet[3661]: I0621 19:28:55.273748    3661 scope.go:117] "RemoveContainer" containerID="a30403507079b51f748bee50c01d61a9f3e828794486d339919a1a454c9b7fe3"
	Jun 21 19:28:55 pause-709611 kubelet[3661]: I0621 19:28:55.274049    3661 scope.go:117] "RemoveContainer" containerID="04e1ab86d0e48628c6f5def0fc98d4bf59fb97125ee2adb85fae83821bec9a5e"
	Jun 21 19:28:59 pause-709611 kubelet[3661]: I0621 19:28:59.993314    3661 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-709611 -n pause-709611
helpers_test.go:261: (dbg) Run:  kubectl --context pause-709611 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (49.34s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (7200.058s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-357717 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0621 19:39:43.012194   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/enable-default-cni-313995/client.crt: no such file or directory
E0621 19:39:49.524926   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/bridge-313995/client.crt: no such file or directory
E0621 19:39:49.530205   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/bridge-313995/client.crt: no such file or directory
E0621 19:39:49.540466   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/bridge-313995/client.crt: no such file or directory
E0621 19:39:49.560740   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/bridge-313995/client.crt: no such file or directory
E0621 19:39:49.601079   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/bridge-313995/client.crt: no such file or directory
E0621 19:39:49.681462   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/bridge-313995/client.crt: no such file or directory
E0621 19:39:49.842089   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/bridge-313995/client.crt: no such file or directory
E0621 19:39:50.162455   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/bridge-313995/client.crt: no such file or directory
E0621 19:39:50.803671   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/bridge-313995/client.crt: no such file or directory
E0621 19:39:52.084804   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/bridge-313995/client.crt: no such file or directory
E0621 19:39:54.645071   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/bridge-313995/client.crt: no such file or directory
E0621 19:39:59.765287   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/bridge-313995/client.crt: no such file or directory
E0621 19:40:03.891040   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/flannel-313995/client.crt: no such file or directory
E0621 19:40:10.005578   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/bridge-313995/client.crt: no such file or directory
E0621 19:40:23.669968   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/custom-flannel-313995/client.crt: no such file or directory
E0621 19:40:30.486408   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/bridge-313995/client.crt: no such file or directory
E0621 19:40:52.025345   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/auto-313995/client.crt: no such file or directory
E0621 19:40:54.862499   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
E0621 19:41:04.932426   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/enable-default-cni-313995/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (18m9s)
	TestNetworkPlugins/group (5m51s)
	TestStartStop (16m31s)
	TestStartStop/group/default-k8s-diff-port (5m50s)
	TestStartStop/group/default-k8s-diff-port/serial (5m50s)
	TestStartStop/group/default-k8s-diff-port/serial/SecondStart (2m2s)
	TestStartStop/group/embed-certs (6m48s)
	TestStartStop/group/embed-certs/serial (6m48s)
	TestStartStop/group/embed-certs/serial/SecondStart (2m53s)
	TestStartStop/group/no-preload (7m15s)
	TestStartStop/group/no-preload/serial (7m15s)
	TestStartStop/group/no-preload/serial/SecondStart (2m33s)
	TestStartStop/group/old-k8s-version (7m55s)
	TestStartStop/group/old-k8s-version/serial (7m55s)
	TestStartStop/group/old-k8s-version/serial/SecondStart (1m23s)

                                                
                                                
goroutine 3264 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 11 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0006bcb60, 0xc00089dbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0001360a8, {0x49e9180, 0x2b, 0x2b}, {0xc0001401e0?, 0xc00089dc30?, 0x4aa5900?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0007c40a0)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0007c40a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

                                                
                                                
goroutine 7 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0001aff00)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 288 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00061b5c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 302
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2343 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36d9810, 0xc000060c60}, 0xc0019b4750, 0xc0019b4798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36d9810, 0xc000060c60}, 0xe0?, 0xc0019b4750, 0xc0019b4798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36d9810?, 0xc000060c60?}, 0x99b616?, 0xc00173ea80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00173ea80?, 0xc0014ba420?, 0xc0007064e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2363
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 41 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.0/klog.go:1175 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 40
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.0/klog.go:1171 +0x171

                                                
                                                
goroutine 3171 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0019f8ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3131
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2737 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2736
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2976 [chan receive, 2 minutes]:
testing.(*T).Run(0xc001480340, {0x265ce16?, 0x60400000004?}, 0xc001bf8100)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001480340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001480340, 0xc001bf8200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2086
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2242 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001fc0de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2220
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2362 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0019e1bc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2361
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3241 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x7ffb8e9a15e8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001b8a120?, 0xc0009efa2b?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001b8a120, {0xc0009efa2b, 0x5d5, 0x5d5})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00139c030, {0xc0009efa2b?, 0x2199cc0?, 0x22b?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001848270, {0x36b4380, 0xc001b88058})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36b44c0, 0xc001848270}, {0x36b4380, 0xc001b88058}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00139c030?, {0x36b44c0, 0xc001848270})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00139c030, {0x36b44c0, 0xc001848270})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36b44c0, 0xc001848270}, {0x36b43e0, 0xc00139c030}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc001bf8100?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3240
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 2622 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2621
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2243 [chan receive, 11 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001b803c0, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2220
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3294 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x7ffb8e9a19c8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001fc1620?, 0xc0013bda69?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001fc1620, {0xc0013bda69, 0x597, 0x597})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001b88240, {0xc0013bda69?, 0x2199cc0?, 0x206?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001398c60, {0x36b4380, 0xc000780750})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36b44c0, 0xc001398c60}, {0x36b4380, 0xc000780750}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001b88240?, {0x36b44c0, 0xc001398c60})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc001b88240, {0x36b44c0, 0xc001398c60})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36b44c0, 0xc001398c60}, {0x36b43e0, 0xc001b88240}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000035b00?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3293
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 2032 [chan receive, 17 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00176ba00, 0x315bdb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1650
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3172 [chan receive, 4 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0018204c0, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3131
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2621 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36d9810, 0xc000060c60}, 0xc000093f50, 0xc000093f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36d9810, 0xc000060c60}, 0xc0?, 0xc000093f50, 0xc000093f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36d9810?, 0xc000060c60?}, 0xc0015b89c0?, 0x551a40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000093fd0?, 0x592e24?, 0xc0019338c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2626
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2342 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001af6610, 0x10)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x213f160?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0019e1aa0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001af6640)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001f59570, {0x36b58e0, 0xc001d60690}, 0x1, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001f59570, 0x3b9aca00, 0x0, 0x1, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2363
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 1588 [chan receive, 19 minutes]:
testing.(*T).Run(0xc0013f21a0, {0x264fa92?, 0x55125c?}, 0xc00168e0f0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0013f21a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0013f21a0, 0x315bb90)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2083 [chan receive, 6 minutes]:
testing.(*T).Run(0xc0013f3380, {0x2651038?, 0x0?}, 0xc00050cc80)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0013f3380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0013f3380, 0xc0019f7240)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2032
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 467 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc001856160, 0xc0014d58c0)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 293
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2952 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001b80450, 0xe)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x213f160?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00243a7e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001b80480)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000786090, {0x36b58e0, 0xc0006d4030}, 0x1, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000786090, 0x3b9aca00, 0x0, 0x1, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2995
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2277 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00061a2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2276
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2086 [chan receive, 6 minutes]:
testing.(*T).Run(0xc00166c000, {0x2651038?, 0x0?}, 0xc001bf8200)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00166c000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00166c000, 0xc0019f7340)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2032
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2813 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000887010, 0xf)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x213f160?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001a55e00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000887080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001c2c7d0, {0x36b58e0, 0xc00177df50}, 0x1, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001c2c7d0, 0x3b9aca00, 0x0, 0x1, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2810
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3232 [syscall, 2 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x1298b, 0xc000c1cab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc001b7e150)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc001b7e150)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000c52000)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000c52000)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0014804e0, 0xc000c52000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x36d9650, 0xc0007023f0}, 0xc0014804e0, {0xc000c5b320, 0x11}, {0x0?, 0xc001486f60?}, {0x551113?, 0x4a16ef?}, {0xc001d56000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0014804e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0014804e0, 0xc000034800)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2861
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2033 [chan receive, 7 minutes]:
testing.(*T).Run(0xc00176bd40, {0x2651038?, 0x0?}, 0xc0017b8000)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00176bd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00176bd40, 0xc0019f71c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2032
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 227 [IO wait, 79 minutes]:
internal/poll.runtime_pollWait(0x7ffb8e9a1ea0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xf?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc00167a000)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc00167a000)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000c08580)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000c08580)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0006540f0, {0x36cc660, 0xc000c08580})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0006540f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0x36d95a8?, 0xc0013f2d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 176
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

                                                
                                                
goroutine 2861 [chan receive, 2 minutes]:
testing.(*T).Run(0xc001bc9040, {0x265ce16?, 0x60400000004?}, 0xc000034800)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001bc9040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001bc9040, 0xc0017b8800)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2084
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3109 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001e0dec0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3156
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2278 [chan receive, 11 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0019f7100, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2276
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3147 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3146
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2257 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2256
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2233 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36d9810, 0xc000060c60}, 0xc001691f50, 0xc0015bcf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36d9810, 0xc000060c60}, 0x10?, 0xc001691f50, 0xc001691f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36d9810?, 0xc000060c60?}, 0xc0014801a0?, 0x551a40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001691fd0?, 0x592e24?, 0xc00167c000?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2243
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2620 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001b80650, 0x10)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x213f160?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0019f8c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001b80680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00139a7c0, {0x36b58e0, 0xc001484c30}, 0x1, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00139a7c0, 0x3b9aca00, 0x0, 0x1, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2626
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 289 [chan receive, 75 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000c06a40, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 302
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3299 [syscall, 2 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x12a85, 0xc000c4aab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc0019902a0)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc0019902a0)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0018522c0)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0018522c0)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc001bc8d00, 0xc0018522c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x36d9650, 0xc0004803f0}, 0xc001bc8d00, {0xc000892860, 0x1c}, {0x0?, 0xc00148d760?}, {0x551113?, 0x4a16ef?}, {0xc000744800, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001bc8d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001bc8d00, 0xc0017a8200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3029
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2255 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0019f70d0, 0x10)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x213f160?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000140b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0019f7100)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0004e4370, {0x36b58e0, 0xc0013d0db0}, 0x1, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0004e4370, 0x3b9aca00, 0x0, 0x1, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2278
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 451 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc0014d1a20, 0xc000061f80)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 450
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2344 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2343
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3293 [syscall, 2 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x12bbe, 0xc0015d5ab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc001b7e720)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc001b7e720)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000c52840)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000c52840)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0014809c0, 0xc000c52840)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x36d9650, 0xc0007020e0}, 0xc0014809c0, {0xc000c5a1f8, 0x16}, {0x0?, 0xc001697f60?}, {0x551113?, 0x4a16ef?}, {0xc000223200, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0014809c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0014809c0, 0xc000035b00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2728
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3296 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc000c52840, 0xc001f5c9c0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3293
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2256 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36d9810, 0xc000060c60}, 0xc0019b5750, 0xc0019b5798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36d9810, 0xc000060c60}, 0xd0?, 0xc0019b5750, 0xc0019b5798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36d9810?, 0xc000060c60?}, 0xc001eef2c0?, 0xc000704b80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0019b57d0?, 0x7b6565?, 0xc001fb6900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2278
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 332 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000c06a10, 0x24)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x213f160?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00061b4a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000c06a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c02740, {0x36b58e0, 0xc00087bc50}, 0x1, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000c02740, 0x3b9aca00, 0x0, 0x1, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 289
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 333 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36d9810, 0xc000060c60}, 0xc000095750, 0xc0000acf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36d9810, 0xc000060c60}, 0x60?, 0xc000095750, 0xc000095798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36d9810?, 0xc000060c60?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000957d0?, 0x592e24?, 0xc0014d4660?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 289
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 334 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 333
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2561 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0019f8e40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2616
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2814 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36d9810, 0xc000060c60}, 0xc000507f50, 0xc000507f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36d9810, 0xc000060c60}, 0x7?, 0xc000507f50, 0xc000507f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36d9810?, 0xc000060c60?}, 0xc001480680?, 0x551a40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000507fd0?, 0x592e24?, 0xc001398780?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2810
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2363 [chan receive, 11 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001af6640, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2361
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2084 [chan receive, 7 minutes]:
testing.(*T).Run(0xc0013f3520, {0x2651038?, 0x0?}, 0xc0017b8800)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0013f3520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0013f3520, 0xc0019f7280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2032
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2626 [chan receive, 9 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001b80680, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2616
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cache.go:122 +0x585

                                                
                                                
goroutine 1607 [chan receive, 6 minutes]:
testing.(*testContext).waitParallel(0xc0004b7680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1665 +0x5e9
testing.tRunner(0xc0006bcea0, 0xc00168e0f0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1588
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 531 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc001c0a6e0, 0xc0019a9e00)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 530
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2728 [chan receive, 2 minutes]:
testing.(*T).Run(0xc001bc8b60, {0x265ce16?, 0x60400000004?}, 0xc000035b00)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001bc8b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001bc8b60, 0xc0017b8000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2033
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1650 [chan receive, 17 minutes]:
testing.(*T).Run(0xc0013f31e0, {0x264fa92?, 0x551113?}, 0x315bdb0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0013f31e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0013f31e0, 0x315bbd8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2815 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2814
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2954 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2953
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3135 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001820490, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x213f160?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0019f89c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0018204c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00077a7b0, {0x36b58e0, 0xc000d125a0}, 0x1, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00077a7b0, 0x3b9aca00, 0x0, 0x1, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3172
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2232 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001b80390, 0x11)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x213f160?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001fc0cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001b803c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001c2c010, {0x36b58e0, 0xc00051d740}, 0x1, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001c2c010, 0x3b9aca00, 0x0, 0x1, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2243
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2953 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36d9810, 0xc000060c60}, 0xc001692750, 0xc001692798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36d9810, 0xc000060c60}, 0x80?, 0xc001692750, 0xc001692798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36d9810?, 0xc000060c60?}, 0x99b601?, 0xc000060c60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592dc5?, 0xc000c52000?, 0xc001e1e180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2995
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2082 [chan receive, 17 minutes]:
testing.(*testContext).waitParallel(0xc0004b7680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013f2b60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013f2b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0013f2b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0013f2b60, 0xc0019f7200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2032
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3300 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x7ffb8e9a17d8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00189aa80?, 0xc000b78211?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00189aa80, {0xc000b78211, 0x5ef, 0x5ef})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000780268, {0xc000b78211?, 0x2199cc0?, 0x211?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002086570, {0x36b4380, 0xc0007820a0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36b44c0, 0xc002086570}, {0x36b4380, 0xc0007820a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000780268?, {0x36b44c0, 0xc002086570})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000780268, {0x36b44c0, 0xc002086570})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36b44c0, 0xc002086570}, {0x36b43e0, 0xc000780268}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0017a8200?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3299
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 2810 [chan receive, 7 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000887080, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2808
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3145 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000c075d0, 0x1)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x213f160?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001e0dda0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000c07600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00197d360, {0x36b58e0, 0xc00077c390}, 0x1, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00197d360, 0x3b9aca00, 0x0, 0x1, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3110
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3240 [syscall, 2 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x128b7, 0xc0015bbab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc0015e41b0)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc0015e41b0)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0019e4000)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0019e4000)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc00166c1a0, 0xc0019e4000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x36d9650, 0xc00055c070}, 0xc00166c1a0, {0xc001886060, 0x12}, {0x0?, 0xc002775760?}, {0x551113?, 0x4a16ef?}, {0xc001438000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00166c1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00166c1a0, 0xc001bf8100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2976
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3242 [IO wait]:
internal/poll.runtime_pollWait(0x7ffb8c5038b8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001b8a1e0?, 0xc0014b89ce?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001b8a1e0, {0xc0014b89ce, 0x3632, 0x3632})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00139c058, {0xc0014b89ce?, 0xc0007020e0?, 0x3e65?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0018482a0, {0x36b4380, 0xc0007800e8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36b44c0, 0xc0018482a0}, {0x36b4380, 0xc0007800e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00139c058?, {0x36b44c0, 0xc0018482a0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00139c058, {0x36b44c0, 0xc0018482a0})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36b44c0, 0xc0018482a0}, {0x36b43e0, 0xc00139c058}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00167a180?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3240
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 662 [select, 75 minutes]:
net/http.(*persistConn).readLoop(0xc001e24d80)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 660
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 663 [select, 75 minutes]:
net/http.(*persistConn).writeLoop(0xc001e24d80)
	/usr/local/go/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 660
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 3029 [chan receive, 2 minutes]:
testing.(*T).Run(0xc000c50ea0, {0x265ce16?, 0x60400000004?}, 0xc0017a8200)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc000c50ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc000c50ea0, 0xc00050cc80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2083
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2234 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2233
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2995 [chan receive, 6 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001b80480, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2987
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2809 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001a55f20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2808
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3146 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36d9810, 0xc000060c60}, 0xc00148a750, 0xc0000abf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36d9810, 0xc000060c60}, 0xd3?, 0xc00148a750, 0xc00148a798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36d9810?, 0xc000060c60?}, 0xc001d31c20?, 0xc001d31c20?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001bba3f0?, 0x0?, 0xc001f592c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3110
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3302 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc0018522c0, 0xc0019a8a20)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3299
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 3301 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x7ffb8c503aa8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00189ab40?, 0xc000c64b72?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00189ab40, {0xc000c64b72, 0x148e, 0x148e})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000780298, {0xc000c64b72?, 0x746e6f632d726961?, 0x2000?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0020865a0, {0x36b4380, 0xc00139c0a0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36b44c0, 0xc0020865a0}, {0x36b4380, 0xc00139c0a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000780298?, {0x36b44c0, 0xc0020865a0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000780298, {0x36b44c0, 0xc0020865a0})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36b44c0, 0xc0020865a0}, {0x36b43e0, 0xc000780298}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x736f705d2b5b090a?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3299
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 3243 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc0019e4000, 0xc0007063c0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3240
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2736 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36d9810, 0xc000060c60}, 0xc000509f50, 0xc0015d2f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36d9810, 0xc000060c60}, 0x80?, 0xc000509f50, 0xc000509f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36d9810?, 0xc000060c60?}, 0xc001480340?, 0x551a40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592dc5?, 0xc000c2a420?, 0xc001e1e780?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2751
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2751 [chan receive, 7 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000075240, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2746
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2750 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001e2a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2746
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2735 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000075210, 0x10)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x213f160?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001e2a2a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000075240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001bfe460, {0x36b58e0, 0xc002086300}, 0x1, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001bfe460, 0x3b9aca00, 0x0, 0x1, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2751
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2994 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00243a900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2987
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3295 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x7ffb8e9a13f8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001fc16e0?, 0xc001492ba8?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001fc16e0, {0xc001492ba8, 0x1458, 0x1458})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001b88258, {0xc001492ba8?, 0xc000095d30?, 0x2000?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001398c90, {0x36b4380, 0xc000782498})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36b44c0, 0xc001398c90}, {0x36b4380, 0xc000782498}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001b88258?, {0x36b44c0, 0xc001398c90})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc001b88258, {0x36b44c0, 0xc001398c90})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36b44c0, 0xc001398c90}, {0x36b43e0, 0xc001b88258}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000706660?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3293
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 3137 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3136
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3136 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36d9810, 0xc000060c60}, 0xc001696750, 0xc001696798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36d9810, 0xc000060c60}, 0x0?, 0xc001696750, 0xc001696798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36d9810?, 0xc000060c60?}, 0xc0013f3601?, 0xc000060c60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0xc000c52401?, 0xc000060c60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3172
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3110 [chan receive, 6 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000c07600, 0xc000060c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3156
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.2/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3233 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x7ffb8e9a1300, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001fc0060?, 0xc0014125f0?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001fc0060, {0xc0014125f0, 0x210, 0x210})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001b88010, {0xc0014125f0?, 0x7ffb8d7b9ee8?, 0x43?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001398270, {0x36b4380, 0xc000782008})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36b44c0, 0xc001398270}, {0x36b4380, 0xc000782008}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001b88010?, {0x36b44c0, 0xc001398270})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc001b88010, {0x36b44c0, 0xc001398270})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36b44c0, 0xc001398270}, {0x36b43e0, 0xc001b88010}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000034800?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3232
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 3282 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x7ffb8e9a14f0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001fc0120?, 0xc000c240ed?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001fc0120, {0xc000c240ed, 0x3f13, 0x3f13})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001b88028, {0xc000c240ed?, 0x6f636562756b2d6f?, 0x3e28?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0013982a0, {0x36b4380, 0xc000782020})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36b44c0, 0xc0013982a0}, {0x36b4380, 0xc000782020}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001b88028?, {0x36b44c0, 0xc0013982a0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc001b88028, {0x36b44c0, 0xc0013982a0})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36b44c0, 0xc0013982a0}, {0x36b43e0, 0xc001b88028}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x6458535446305144?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3232
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 3283 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc000c52000, 0xc001f5c180)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3232
	/usr/local/go/src/os/exec/exec.go:750 +0x973


Test pass (150/203)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 23.41
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.2/json-events 11.42
13 TestDownloadOnly/v1.30.2/preload-exists 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.52
18 TestDownloadOnly/v1.30.2/DeleteAll 0.13
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.56
22 TestOffline 61.89
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
28 TestCertOptions 87.21
29 TestCertExpiration 278.42
31 TestForceSystemdFlag 83.01
32 TestForceSystemdEnv 86.24
34 TestKVMDriverInstallOrUpdate 3.59
38 TestErrorSpam/setup 39.17
39 TestErrorSpam/start 0.33
40 TestErrorSpam/status 0.69
41 TestErrorSpam/pause 1.52
42 TestErrorSpam/unpause 1.52
43 TestErrorSpam/stop 4.49
46 TestFunctional/serial/CopySyncFile 0
47 TestFunctional/serial/StartWithProxy 92.22
48 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/SoftStart 61.25
50 TestFunctional/serial/KubeContext 0.04
51 TestFunctional/serial/KubectlGetPods 0.08
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.6
55 TestFunctional/serial/CacheCmd/cache/add_local 2.11
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
60 TestFunctional/serial/CacheCmd/cache/delete 0.09
61 TestFunctional/serial/MinikubeKubectlCmd 0.1
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
63 TestFunctional/serial/ExtraConfig 34.55
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 1.36
66 TestFunctional/serial/LogsFileCmd 1.32
67 TestFunctional/serial/InvalidService 4.1
69 TestFunctional/parallel/ConfigCmd 0.3
70 TestFunctional/parallel/DashboardCmd 13.7
71 TestFunctional/parallel/DryRun 0.28
72 TestFunctional/parallel/InternationalLanguage 0.14
73 TestFunctional/parallel/StatusCmd 0.89
77 TestFunctional/parallel/ServiceCmdConnect 7.76
78 TestFunctional/parallel/AddonsCmd 0.12
79 TestFunctional/parallel/PersistentVolumeClaim 45.02
81 TestFunctional/parallel/SSHCmd 0.42
82 TestFunctional/parallel/CpCmd 1.3
83 TestFunctional/parallel/MySQL 29.6
84 TestFunctional/parallel/FileSync 0.21
85 TestFunctional/parallel/CertSync 1.19
89 TestFunctional/parallel/NodeLabels 0.07
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.39
93 TestFunctional/parallel/License 0.57
94 TestFunctional/parallel/ServiceCmd/DeployApp 11.19
104 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
105 TestFunctional/parallel/ProfileCmd/profile_list 0.37
106 TestFunctional/parallel/MountCmd/any-port 8.74
107 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
108 TestFunctional/parallel/MountCmd/specific-port 1.9
109 TestFunctional/parallel/ServiceCmd/List 0.42
110 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
111 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
112 TestFunctional/parallel/MountCmd/VerifyCleanup 1.54
113 TestFunctional/parallel/ServiceCmd/Format 0.38
114 TestFunctional/parallel/ServiceCmd/URL 0.3
115 TestFunctional/parallel/Version/short 0.05
116 TestFunctional/parallel/Version/components 0.78
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
121 TestFunctional/parallel/ImageCommands/ImageBuild 3.11
122 TestFunctional/parallel/ImageCommands/Setup 2.18
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.06
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.76
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 12.79
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.92
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.61
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.98
133 TestFunctional/delete_addon-resizer_images 0.07
134 TestFunctional/delete_my-image_image 0.01
135 TestFunctional/delete_minikube_cached_images 0.01
143 TestMultiControlPlane/serial/NodeLabels 0.06
157 TestJSONOutput/start/Command 95.6
158 TestJSONOutput/start/Audit 0
160 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/pause/Command 0.72
164 TestJSONOutput/pause/Audit 0
166 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/unpause/Command 0.61
170 TestJSONOutput/unpause/Audit 0
172 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/stop/Command 7.34
176 TestJSONOutput/stop/Audit 0
178 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
180 TestErrorJSONOutput 0.19
185 TestMainNoArgs 0.04
186 TestMinikubeProfile 87.77
189 TestMountStart/serial/StartWithMountFirst 25.83
190 TestMountStart/serial/VerifyMountFirst 0.36
191 TestMountStart/serial/StartWithMountSecond 26.85
192 TestMountStart/serial/VerifyMountSecond 0.36
193 TestMountStart/serial/DeleteFirst 0.67
194 TestMountStart/serial/VerifyMountPostDelete 0.37
195 TestMountStart/serial/Stop 1.27
196 TestMountStart/serial/RestartStopped 22.61
197 TestMountStart/serial/VerifyMountPostStop 0.38
200 TestMultiNode/serial/FreshStart2Nodes 96.32
201 TestMultiNode/serial/DeployApp2Nodes 5.56
202 TestMultiNode/serial/PingHostFrom2Pods 0.77
203 TestMultiNode/serial/AddNode 36.27
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.21
206 TestMultiNode/serial/CopyFile 6.94
207 TestMultiNode/serial/StopNode 2.2
208 TestMultiNode/serial/StartAfterStop 27.17
210 TestMultiNode/serial/DeleteNode 2.06
212 TestMultiNode/serial/RestartMultiNode 167.52
213 TestMultiNode/serial/ValidateNameConflict 41.75
220 TestScheduledStopUnix 109.78
224 TestRunningBinaryUpgrade 227
229 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
233 TestNoKubernetes/serial/StartWithK8s 71.96
242 TestNoKubernetes/serial/StartWithStopK8s 65.22
250 TestNoKubernetes/serial/Start 49.7
251 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
252 TestNoKubernetes/serial/ProfileList 30.19
253 TestNoKubernetes/serial/Stop 2.7
254 TestNoKubernetes/serial/StartNoArgs 21.4
256 TestPause/serial/Start 105.23
257 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
258 TestStoppedBinaryUpgrade/Setup 2.28
259 TestStoppedBinaryUpgrade/Upgrade 135.09
262 TestStoppedBinaryUpgrade/MinikubeLogs 0.9
x
+
TestDownloadOnly/v1.20.0/json-events (23.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-917171 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-917171 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (23.409986249s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (23.41s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-917171
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-917171: exit status 85 (56.267402ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-917171 | jenkins | v1.33.1 | 21 Jun 24 17:41 UTC |          |
	|         | -p download-only-917171        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/21 17:41:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0621 17:41:05.942684   15341 out.go:291] Setting OutFile to fd 1 ...
	I0621 17:41:05.942943   15341 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 17:41:05.942954   15341 out.go:304] Setting ErrFile to fd 2...
	I0621 17:41:05.942961   15341 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 17:41:05.943142   15341 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	W0621 17:41:05.943266   15341 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19112-8111/.minikube/config/config.json: open /home/jenkins/minikube-integration/19112-8111/.minikube/config/config.json: no such file or directory
	I0621 17:41:05.943797   15341 out.go:298] Setting JSON to true
	I0621 17:41:05.944713   15341 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1364,"bootTime":1718990302,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 17:41:05.944773   15341 start.go:139] virtualization: kvm guest
	I0621 17:41:05.947304   15341 out.go:97] [download-only-917171] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 17:41:05.947411   15341 notify.go:220] Checking for updates...
	W0621 17:41:05.947438   15341 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball: no such file or directory
	I0621 17:41:05.948997   15341 out.go:169] MINIKUBE_LOCATION=19112
	I0621 17:41:05.950519   15341 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 17:41:05.952096   15341 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 17:41:05.953419   15341 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 17:41:05.954865   15341 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0621 17:41:05.957314   15341 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0621 17:41:05.957586   15341 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 17:41:06.061038   15341 out.go:97] Using the kvm2 driver based on user configuration
	I0621 17:41:06.061062   15341 start.go:297] selected driver: kvm2
	I0621 17:41:06.061072   15341 start.go:901] validating driver "kvm2" against <nil>
	I0621 17:41:06.061411   15341 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 17:41:06.061542   15341 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 17:41:06.076361   15341 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 17:41:06.076429   15341 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0621 17:41:06.077124   15341 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0621 17:41:06.077341   15341 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0621 17:41:06.077415   15341 cni.go:84] Creating CNI manager for ""
	I0621 17:41:06.077446   15341 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0621 17:41:06.077454   15341 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0621 17:41:06.077528   15341 start.go:340] cluster config:
	{Name:download-only-917171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-917171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 17:41:06.077757   15341 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 17:41:06.079945   15341 out.go:97] Downloading VM boot image ...
	I0621 17:41:06.079986   15341 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/iso/amd64/minikube-v1.33.1-1718923868-19112-amd64.iso
	I0621 17:41:15.421431   15341 out.go:97] Starting "download-only-917171" primary control-plane node in "download-only-917171" cluster
	I0621 17:41:15.421461   15341 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0621 17:41:15.525125   15341 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0621 17:41:15.525154   15341 cache.go:56] Caching tarball of preloaded images
	I0621 17:41:15.525419   15341 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0621 17:41:15.527511   15341 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0621 17:41:15.527555   15341 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0621 17:41:15.624599   15341 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-917171 host does not exist
	  To start a cluster, run: "minikube start -p download-only-917171"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
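Note: the preload download above appends a ?checksum=md5:... hint to the URL, and the "getting checksum" step (plus the "saving/verifying checksum" steps in the later v1.30.2 run) show the tarball being validated before it is cached. A minimal Go sketch of that verify-while-downloading idea; the helper name and paths are hypothetical, not minikube's actual download.go API:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url to dest while hashing it, then compares the
// digest with the expected value before the file is trusted. MD5 is used
// purely as an integrity check here, matching the checksum hint in the log.
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	err := downloadWithMD5(
		"https://example.com/preloaded-images.tar.lz4", // placeholder URL
		"/tmp/preloaded-images.tar.lz4",
		"f93b07cde9c3289306cbaeb7a1803c19", // md5 from the download line above
	)
	fmt.Println("download:", err)
}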

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-917171
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/json-events (11.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-637010 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-637010 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.415378778s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (11.42s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/LogsDuration (0.52s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-637010
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-637010: exit status 85 (520.563632ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-917171 | jenkins | v1.33.1 | 21 Jun 24 17:41 UTC |                     |
	|         | -p download-only-917171        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 21 Jun 24 17:41 UTC | 21 Jun 24 17:41 UTC |
	| delete  | -p download-only-917171        | download-only-917171 | jenkins | v1.33.1 | 21 Jun 24 17:41 UTC | 21 Jun 24 17:41 UTC |
	| start   | -o=json --download-only        | download-only-637010 | jenkins | v1.33.1 | 21 Jun 24 17:41 UTC |                     |
	|         | -p download-only-637010        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/21 17:41:29
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0621 17:41:29.678835   15559 out.go:291] Setting OutFile to fd 1 ...
	I0621 17:41:29.679126   15559 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 17:41:29.679137   15559 out.go:304] Setting ErrFile to fd 2...
	I0621 17:41:29.679144   15559 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 17:41:29.679309   15559 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 17:41:29.679933   15559 out.go:298] Setting JSON to true
	I0621 17:41:29.680791   15559 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1388,"bootTime":1718990302,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 17:41:29.680860   15559 start.go:139] virtualization: kvm guest
	I0621 17:41:29.683271   15559 out.go:97] [download-only-637010] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 17:41:29.683455   15559 notify.go:220] Checking for updates...
	I0621 17:41:29.684811   15559 out.go:169] MINIKUBE_LOCATION=19112
	I0621 17:41:29.686358   15559 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 17:41:29.687699   15559 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 17:41:29.689172   15559 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 17:41:29.690460   15559 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0621 17:41:29.692673   15559 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0621 17:41:29.692901   15559 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 17:41:29.724865   15559 out.go:97] Using the kvm2 driver based on user configuration
	I0621 17:41:29.724894   15559 start.go:297] selected driver: kvm2
	I0621 17:41:29.724909   15559 start.go:901] validating driver "kvm2" against <nil>
	I0621 17:41:29.725342   15559 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 17:41:29.725464   15559 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19112-8111/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0621 17:41:29.741769   15559 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0621 17:41:29.741843   15559 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0621 17:41:29.742361   15559 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0621 17:41:29.742518   15559 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0621 17:41:29.742579   15559 cni.go:84] Creating CNI manager for ""
	I0621 17:41:29.742595   15559 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0621 17:41:29.742608   15559 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0621 17:41:29.742669   15559 start.go:340] cluster config:
	{Name:download-only-637010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-637010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 17:41:29.742769   15559 iso.go:125] acquiring lock: {Name:mk9bcacef563c74661da696f2e2fb4463daf80f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0621 17:41:29.744447   15559 out.go:97] Starting "download-only-637010" primary control-plane node in "download-only-637010" cluster
	I0621 17:41:29.744463   15559 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 17:41:29.841076   15559 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 17:41:29.841099   15559 cache.go:56] Caching tarball of preloaded images
	I0621 17:41:29.841246   15559 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0621 17:41:29.843170   15559 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0621 17:41:29.843189   15559 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 ...
	I0621 17:41:29.944601   15559 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:cd14409e225276132db5cf7d5d75c2d2 -> /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0621 17:41:39.474731   15559 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 ...
	I0621 17:41:39.474818   15559 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/19112-8111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-637010 host does not exist
	  To start a cluster, run: "minikube start -p download-only-637010"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.52s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-637010
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-459678 --alsologtostderr --binary-mirror http://127.0.0.1:34189 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-459678" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-459678
--- PASS: TestBinaryMirror (0.56s)
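Note: TestBinaryMirror starts minikube with --binary-mirror http://127.0.0.1:34189, i.e. a local HTTP endpoint standing in for the release mirror that kubectl, kubelet and kubeadm are normally fetched from. A minimal stand-in for such a mirror in Go (the directory layout below is an assumption for illustration; the real test serves its own temporary files):

package main

import (
	"log"
	"net/http"
)

// Serve a directory laid out like the Kubernetes release mirror, e.g.
//   ./mirror/v1.30.2/bin/linux/amd64/kubectl
// so that "minikube start --binary-mirror http://127.0.0.1:34189" resolves
// its binary downloads locally instead of reaching the public mirror.
func main() {
	log.Fatal(http.ListenAndServe("127.0.0.1:34189", http.FileServer(http.Dir("./mirror"))))
}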

                                                
                                    
x
+
TestOffline (61.89s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-246112 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-246112 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m0.585606955s)
helpers_test.go:175: Cleaning up "offline-crio-246112" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-246112
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-246112: (1.307092559s)
--- PASS: TestOffline (61.89s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-299362
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-299362: exit status 85 (47.107657ms)

                                                
                                                
-- stdout --
	* Profile "addons-299362" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-299362"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-299362
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-299362: exit status 85 (47.13592ms)

                                                
                                                
-- stdout --
	* Profile "addons-299362" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-299362"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestCertOptions (87.21s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-912751 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-912751 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m25.750535947s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-912751 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-912751 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-912751 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-912751" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-912751
--- PASS: TestCertOptions (87.21s)
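Note: the openssl x509 step above dumps the generated apiserver certificate so the test can check its SANs against the custom --apiserver-ips and --apiserver-names, while the config view / admin.conf steps cover the custom port. The same fields can be read with Go's crypto/x509; a minimal sketch (the local file path is illustrative — inside the VM the certificate is /var/lib/minikube/certs/apiserver.crt, as in the ssh command above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// Print the SANs and expiry of an API server certificate, the same fields the
// openssl invocation in the test inspects.
func main() {
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs:", cert.IPAddresses)
	fmt.Println("NotAfter:", cert.NotAfter)
}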

                                                
                                    
x
+
TestCertExpiration (278.42s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-843358 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-843358 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m19.081478843s)
E0621 19:25:54.862098   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-843358 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-843358 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (18.491530932s)
helpers_test.go:175: Cleaning up "cert-expiration-843358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-843358
--- PASS: TestCertExpiration (278.42s)

                                                
                                    
x
+
TestForceSystemdFlag (83.01s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-352820 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-352820 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m21.987919628s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-352820 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-352820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-352820
--- PASS: TestForceSystemdFlag (83.01s)

                                                
                                    
x
+
TestForceSystemdEnv (86.24s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-170896 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-170896 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m25.232350279s)
helpers_test.go:175: Cleaning up "force-systemd-env-170896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-170896
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-170896: (1.004806139s)
--- PASS: TestForceSystemdEnv (86.24s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (3.59s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.59s)

                                                
                                    
x
+
TestErrorSpam/setup (39.17s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-868483 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-868483 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-868483 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-868483 --driver=kvm2  --container-runtime=crio: (39.170589869s)
--- PASS: TestErrorSpam/setup (39.17s)

                                                
                                    
x
+
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-868483 --log_dir /tmp/nospam-868483 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-868483 --log_dir /tmp/nospam-868483 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-868483 --log_dir /tmp/nospam-868483 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
x
+
TestErrorSpam/status (0.69s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-868483 --log_dir /tmp/nospam-868483 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-868483 --log_dir /tmp/nospam-868483 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-868483 --log_dir /tmp/nospam-868483 status
--- PASS: TestErrorSpam/status (0.69s)

                                                
                                    
x
+
TestErrorSpam/pause (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-868483 --log_dir /tmp/nospam-868483 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-868483 --log_dir /tmp/nospam-868483 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-868483 --log_dir /tmp/nospam-868483 pause
--- PASS: TestErrorSpam/pause (1.52s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-868483 --log_dir /tmp/nospam-868483 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-868483 --log_dir /tmp/nospam-868483 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-868483 --log_dir /tmp/nospam-868483 unpause
--- PASS: TestErrorSpam/unpause (1.52s)

                                                
                                    
x
+
TestErrorSpam/stop (4.49s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-868483 --log_dir /tmp/nospam-868483 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-868483 --log_dir /tmp/nospam-868483 stop: (1.535544475s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-868483 --log_dir /tmp/nospam-868483 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-868483 --log_dir /tmp/nospam-868483 stop: (1.729893246s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-868483 --log_dir /tmp/nospam-868483 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-868483 --log_dir /tmp/nospam-868483 stop: (1.224110036s)
--- PASS: TestErrorSpam/stop (4.49s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19112-8111/.minikube/files/etc/test/nested/copy/15329/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (92.22s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-620822 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-620822 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m32.21889657s)
--- PASS: TestFunctional/serial/StartWithProxy (92.22s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (61.25s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-620822 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-620822 --alsologtostderr -v=8: (1m1.248311014s)
functional_test.go:659: soft start took 1m1.249071057s for "functional-620822" cluster.
--- PASS: TestFunctional/serial/SoftStart (61.25s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-620822 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.6s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-620822 cache add registry.k8s.io/pause:3.1: (1.184281723s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-620822 cache add registry.k8s.io/pause:3.3: (1.219351225s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-620822 cache add registry.k8s.io/pause:latest: (1.199375365s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.60s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-620822 /tmp/TestFunctionalserialCacheCmdcacheadd_local3972213111/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 cache add minikube-local-cache-test:functional-620822
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-620822 cache add minikube-local-cache-test:functional-620822: (1.77477379s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 cache delete minikube-local-cache-test:functional-620822
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-620822
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-620822 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (206.044855ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 kubectl -- --context functional-620822 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-620822 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (34.55s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-620822 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-620822 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.551750793s)
functional_test.go:757: restart took 34.551855847s for "functional-620822" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.55s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-620822 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
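The health check above only asserts that every tier=control-plane pod is Running and Ready. The same view can be pulled with a one-liner (the jsonpath formatting is an illustration; the test parses the raw JSON instead):

    kubectl --context functional-620822 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'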

                                                
                                    
TestFunctional/serial/LogsCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-620822 logs: (1.363375445s)
--- PASS: TestFunctional/serial/LogsCmd (1.36s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 logs --file /tmp/TestFunctionalserialLogsFileCmd812486146/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-620822 logs --file /tmp/TestFunctionalserialLogsFileCmd812486146/001/logs.txt: (1.319762912s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.32s)

                                                
                                    
TestFunctional/serial/InvalidService (4.1s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-620822 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-620822
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-620822: exit status 115 (264.779082ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.117:30380 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-620822 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.10s)
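Exit status 115 (SVC_UNREACHABLE) is the expected outcome here: the applied service has no running backing pod, so minikube refuses to hand out a URL. The contents of testdata/invalidsvc.yaml are not reproduced in this report, but any NodePort service whose selector matches nothing behaves the same way, for example:

    kubectl --context functional-620822 create service nodeport invalid-svc --tcp=80:8080
    minikube service invalid-svc -p functional-620822    # fails with SVC_UNREACHABLE: no running pod
    kubectl --context functional-620822 delete service invalid-svc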

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-620822 config get cpus: exit status 14 (50.445528ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-620822 config get cpus: exit status 14 (45.978421ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)
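Both non-zero exits in this block are expected: exit status 14 is minikube's "key not found in config" code, returned whenever config get is called on an unset key. The round trip, with minikube standing in for out/minikube-linux-amd64:

    minikube -p functional-620822 config set cpus 2
    minikube -p functional-620822 config get cpus      # prints 2
    minikube -p functional-620822 config unset cpus
    minikube -p functional-620822 config get cpus      # exit 14: key not in config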

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-620822 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-620822 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 28030: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.70s)

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-620822 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-620822 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (137.095781ms)

                                                
                                                
-- stdout --
	* [functional-620822] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0621 18:25:57.233879   27798 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:25:57.234173   27798 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:25:57.234183   27798 out.go:304] Setting ErrFile to fd 2...
	I0621 18:25:57.234190   27798 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:25:57.234419   27798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:25:57.235025   27798 out.go:298] Setting JSON to false
	I0621 18:25:57.235952   27798 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4055,"bootTime":1718990302,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 18:25:57.236013   27798 start.go:139] virtualization: kvm guest
	I0621 18:25:57.238397   27798 out.go:177] * [functional-620822] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0621 18:25:57.240292   27798 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 18:25:57.240298   27798 notify.go:220] Checking for updates...
	I0621 18:25:57.241907   27798 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 18:25:57.243399   27798 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:25:57.244736   27798 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:25:57.246278   27798 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 18:25:57.247761   27798 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 18:25:57.249729   27798 config.go:182] Loaded profile config "functional-620822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:25:57.250402   27798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:25:57.250466   27798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:25:57.265133   27798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44515
	I0621 18:25:57.265537   27798 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:25:57.266104   27798 main.go:141] libmachine: Using API Version  1
	I0621 18:25:57.266135   27798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:25:57.266481   27798 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:25:57.266689   27798 main.go:141] libmachine: (functional-620822) Calling .DriverName
	I0621 18:25:57.266939   27798 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 18:25:57.267245   27798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:25:57.267292   27798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:25:57.281325   27798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36871
	I0621 18:25:57.281706   27798 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:25:57.282334   27798 main.go:141] libmachine: Using API Version  1
	I0621 18:25:57.282366   27798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:25:57.282724   27798 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:25:57.282954   27798 main.go:141] libmachine: (functional-620822) Calling .DriverName
	I0621 18:25:57.315073   27798 out.go:177] * Using the kvm2 driver based on existing profile
	I0621 18:25:57.316350   27798 start.go:297] selected driver: kvm2
	I0621 18:25:57.316370   27798 start.go:901] validating driver "kvm2" against &{Name:functional-620822 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:functional-620822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:25:57.316471   27798 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 18:25:57.318536   27798 out.go:177] 
	W0621 18:25:57.319773   27798 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0621 18:25:57.320927   27798 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-620822 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
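--dry-run validates the requested configuration against the existing profile without touching the VM, so the undersized 250MB request fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) while the second call, which drops the memory override, passes. By hand (minikube standing in for out/minikube-linux-amd64):

    minikube start -p functional-620822 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
    echo $?        # 23
    minikube start -p functional-620822 --dry-run --driver=kvm2 --container-runtime=crio
    echo $?        # 0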

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-620822 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-620822 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (143.627204ms)

                                                
                                                
-- stdout --
	* [functional-620822] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0621 18:25:57.515977   27871 out.go:291] Setting OutFile to fd 1 ...
	I0621 18:25:57.516079   27871 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:25:57.516090   27871 out.go:304] Setting ErrFile to fd 2...
	I0621 18:25:57.516095   27871 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 18:25:57.516388   27871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 18:25:57.516891   27871 out.go:298] Setting JSON to false
	I0621 18:25:57.517849   27871 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4055,"bootTime":1718990302,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0621 18:25:57.517904   27871 start.go:139] virtualization: kvm guest
	I0621 18:25:57.520171   27871 out.go:177] * [functional-620822] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0621 18:25:57.522147   27871 notify.go:220] Checking for updates...
	I0621 18:25:57.522176   27871 out.go:177]   - MINIKUBE_LOCATION=19112
	I0621 18:25:57.523415   27871 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0621 18:25:57.524760   27871 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	I0621 18:25:57.526199   27871 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	I0621 18:25:57.527417   27871 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0621 18:25:57.528726   27871 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0621 18:25:57.530369   27871 config.go:182] Loaded profile config "functional-620822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 18:25:57.530793   27871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:25:57.530858   27871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:25:57.546358   27871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36273
	I0621 18:25:57.546857   27871 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:25:57.547362   27871 main.go:141] libmachine: Using API Version  1
	I0621 18:25:57.547383   27871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:25:57.547843   27871 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:25:57.548034   27871 main.go:141] libmachine: (functional-620822) Calling .DriverName
	I0621 18:25:57.548268   27871 driver.go:392] Setting default libvirt URI to qemu:///system
	I0621 18:25:57.548553   27871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 18:25:57.548586   27871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 18:25:57.567235   27871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41537
	I0621 18:25:57.567668   27871 main.go:141] libmachine: () Calling .GetVersion
	I0621 18:25:57.568131   27871 main.go:141] libmachine: Using API Version  1
	I0621 18:25:57.568152   27871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 18:25:57.568455   27871 main.go:141] libmachine: () Calling .GetMachineName
	I0621 18:25:57.568644   27871 main.go:141] libmachine: (functional-620822) Calling .DriverName
	I0621 18:25:57.600218   27871 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0621 18:25:57.601538   27871 start.go:297] selected driver: kvm2
	I0621 18:25:57.601551   27871 start.go:901] validating driver "kvm2" against &{Name:functional-620822 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19112/minikube-v1.33.1-1718923868-19112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718923403-19112@sha256:cc061048d931d84aa4a945fb4686882929674aeba8a6ed833c4fb3a3c2b6805e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:functional-620822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0621 18:25:57.601660   27871 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0621 18:25:57.603922   27871 out.go:177] 
	W0621 18:25:57.605119   27871 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0621 18:25:57.606298   27871 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
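The French output above comes from the same dry-run invocation executed under a French locale; the exact environment variable the test sets is not visible in this log. A by-hand approximation, assuming LC_ALL is honoured and an fr_FR locale is installed on the host:

    LC_ALL=fr_FR.UTF-8 minikube start -p functional-620822 --dry-run --memory 250MB \
      --driver=kvm2 --container-runtime=crio
    # stderr: "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : ..."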

                                                
                                    
TestFunctional/parallel/StatusCmd (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.89s)
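The -f argument is a Go template rendered against minikube's status struct, so "kublet:" in the logged command is just literal label text, not a field name; the field reference is {{.Kubelet}}. The three output modes exercised here, with minikube standing in for out/minikube-linux-amd64:

    minikube -p functional-620822 status                            # human-readable
    minikube -p functional-620822 status -o json                    # machine-readable
    minikube -p functional-620822 status -f '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'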

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-620822 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-620822 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-hrcsv" [ff792025-de16-4804-bd89-9068fde6fdc7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-hrcsv" [ff792025-de16-4804-bd89-9068fde6fdc7] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.254148708s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.117:32641
functional_test.go:1671: http://192.168.39.117:32641: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-hrcsv

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.117:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.117:32641
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.76s)
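This block is the standard deploy/expose/connect loop: create a deployment, expose it as a NodePort service, ask minikube for the node URL, then fetch it. Condensed (the curl step is an illustration; the test performs the HTTP GET from Go):

    kubectl --context functional-620822 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-620822 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(minikube -p functional-620822 service hello-node-connect --url)
    curl -s "$URL"    # echoserver reflects the request, as in the body shown above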

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (45.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0a654752-7f9a-4a8c-8ada-fd2ceb12d40a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003646415s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-620822 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-620822 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-620822 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-620822 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [05202108-6b2a-437a-a40b-dadb3f6f7ea7] Pending
helpers_test.go:344: "sp-pod" [05202108-6b2a-437a-a40b-dadb3f6f7ea7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [05202108-6b2a-437a-a40b-dadb3f6f7ea7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.004270224s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-620822 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-620822 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-620822 delete -f testdata/storage-provisioner/pod.yaml: (2.186263421s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-620822 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6fd79b00-8471-4fe6-a3c4-b22f803e6c7b] Pending
helpers_test.go:344: "sp-pod" [6fd79b00-8471-4fe6-a3c4-b22f803e6c7b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6fd79b00-8471-4fe6-a3c4-b22f803e6c7b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004671002s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-620822 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.02s)
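The point of this block is that data written to the PVC-backed volume survives deleting and recreating the pod. Stripped of the readiness waits, the sequence is (manifest paths are relative to the test source tree):

    kubectl --context functional-620822 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-620822 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-620822 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-620822 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-620822 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-620822 exec sp-pod -- ls /tmp/mount    # foo is still there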

                                                
                                    
TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh -n functional-620822 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 cp functional-620822:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2732386945/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh -n functional-620822 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh -n functional-620822 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.30s)

                                                
                                    
TestFunctional/parallel/MySQL (29.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-620822 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-hdfjq" [417b6f09-7f6c-4c65-8539-3431e33e05a0] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-hdfjq" [417b6f09-7f6c-4c65-8539-3431e33e05a0] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.007227683s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-620822 exec mysql-64454c8b5c-hdfjq -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-620822 exec mysql-64454c8b5c-hdfjq -- mysql -ppassword -e "show databases;": exit status 1 (183.60309ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-620822 exec mysql-64454c8b5c-hdfjq -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-620822 exec mysql-64454c8b5c-hdfjq -- mysql -ppassword -e "show databases;": exit status 1 (148.349189ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-620822 exec mysql-64454c8b5c-hdfjq -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.60s)
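The two failed exec attempts are expected: the pod reports Running before mysqld finishes initialising, so the first connections are refused and the test simply retries. A by-hand equivalent of that retry (pod name taken from this run; the fixed sleep loop is an illustration, not the test's own backoff):

    until kubectl --context functional-620822 exec mysql-64454c8b5c-hdfjq -- \
          mysql -ppassword -e "show databases;"; do
      sleep 2
    done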

                                                
                                    
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/15329/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh "sudo cat /etc/test/nested/copy/15329/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

                                                
                                    
TestFunctional/parallel/CertSync (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/15329.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh "sudo cat /etc/ssl/certs/15329.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/15329.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh "sudo cat /usr/share/ca-certificates/15329.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/153292.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh "sudo cat /etc/ssl/certs/153292.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/153292.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh "sudo cat /usr/share/ca-certificates/153292.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.19s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-620822 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
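The --template argument is a Go template ranging over the first node's label map; the log only shows it shell-quoted. An equivalent call that also prints the label values (an illustration, not the test's exact template):

    kubectl --context functional-620822 get nodes \
      -o go-template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}}={{$v}}{{"\n"}}{{end}}'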

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-620822 ssh "sudo systemctl is-active docker": exit status 1 (196.554714ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-620822 ssh "sudo systemctl is-active containerd": exit status 1 (197.660862ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)
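Both exit status 3 results are the point of this test: systemctl is-active returns 3 for an inactive unit, and with crio selected as the container runtime, docker and containerd must be off. The probes, with minikube standing in for out/minikube-linux-amd64 (the crio line is an extra check, not part of the test):

    minikube -p functional-620822 ssh "sudo systemctl is-active crio"         # active, exit 0
    minikube -p functional-620822 ssh "sudo systemctl is-active docker"       # inactive, exit 3
    minikube -p functional-620822 ssh "sudo systemctl is-active containerd"   # inactive, exit 3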

                                                
                                    
TestFunctional/parallel/License (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.57s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-620822 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-620822 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-ltldx" [a2cbc87e-5693-47e8-9abc-81d092001dec] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-ltldx" [a2cbc87e-5693-47e8-9abc-81d092001dec] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003927548s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.19s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "322.859908ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "45.682499ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-620822 /tmp/TestFunctionalparallelMountCmdany-port2796180980/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1718994356529242793" to /tmp/TestFunctionalparallelMountCmdany-port2796180980/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1718994356529242793" to /tmp/TestFunctionalparallelMountCmdany-port2796180980/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1718994356529242793" to /tmp/TestFunctionalparallelMountCmdany-port2796180980/001/test-1718994356529242793
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-620822 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (253.176403ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun 21 18:25 created-by-test
-rw-r--r-- 1 docker docker 24 Jun 21 18:25 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun 21 18:25 test-1718994356529242793
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh cat /mount-9p/test-1718994356529242793
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-620822 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e31ae874-261c-49c3-a4b3-c85cc8c651ac] Pending
helpers_test.go:344: "busybox-mount" [e31ae874-261c-49c3-a4b3-c85cc8c651ac] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e31ae874-261c-49c3-a4b3-c85cc8c651ac] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e31ae874-261c-49c3-a4b3-c85cc8c651ac] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003628738s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-620822 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-620822 /tmp/TestFunctionalparallelMountCmdany-port2796180980/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.74s)
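The first findmnt failure is only a race with the mount daemon starting up, which is why the test retries once before asserting. The flow by hand, with /tmp/hostdir as a placeholder host path and minikube standing in for out/minikube-linux-amd64:

    # terminal 1: expose a host directory inside the guest over 9p
    minikube mount -p functional-620822 /tmp/hostdir:/mount-9p
    # terminal 2: verify, inspect, then clean up
    minikube -p functional-620822 ssh "findmnt -T /mount-9p | grep 9p"
    minikube -p functional-620822 ssh -- ls -la /mount-9p
    minikube -p functional-620822 ssh "sudo umount -f /mount-9p"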

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "250.489219ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "51.440088ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-620822 /tmp/TestFunctionalparallelMountCmdspecific-port2520658069/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-620822 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (238.290471ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-620822 /tmp/TestFunctionalparallelMountCmdspecific-port2520658069/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-620822 ssh "sudo umount -f /mount-9p": exit status 1 (239.458615ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-620822 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-620822 /tmp/TestFunctionalparallelMountCmdspecific-port2520658069/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.90s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 service list -o json
functional_test.go:1490: Took "458.165119ms" to run "out/minikube-linux-amd64 -p functional-620822 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.117:32328
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-620822 /tmp/TestFunctionalparallelMountCmdVerifyCleanup409223334/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-620822 /tmp/TestFunctionalparallelMountCmdVerifyCleanup409223334/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-620822 /tmp/TestFunctionalparallelMountCmdVerifyCleanup409223334/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-620822 ssh "findmnt -T" /mount1: exit status 1 (304.27946ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-620822 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-620822 /tmp/TestFunctionalparallelMountCmdVerifyCleanup409223334/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-620822 /tmp/TestFunctionalparallelMountCmdVerifyCleanup409223334/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-620822 /tmp/TestFunctionalparallelMountCmdVerifyCleanup409223334/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.117:32328
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.30s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.78s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.78s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-620822 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-620822
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-620822
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240513-cd2ac642
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-620822 image ls --format short --alsologtostderr:
I0621 18:26:37.550488   29728 out.go:291] Setting OutFile to fd 1 ...
I0621 18:26:37.550628   29728 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0621 18:26:37.550639   29728 out.go:304] Setting ErrFile to fd 2...
I0621 18:26:37.550643   29728 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0621 18:26:37.550838   29728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
I0621 18:26:37.551382   29728 config.go:182] Loaded profile config "functional-620822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0621 18:26:37.551497   29728 config.go:182] Loaded profile config "functional-620822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0621 18:26:37.551847   29728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0621 18:26:37.551890   29728 main.go:141] libmachine: Launching plugin server for driver kvm2
I0621 18:26:37.566225   29728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44363
I0621 18:26:37.567673   29728 main.go:141] libmachine: () Calling .GetVersion
I0621 18:26:37.568564   29728 main.go:141] libmachine: Using API Version  1
I0621 18:26:37.568594   29728 main.go:141] libmachine: () Calling .SetConfigRaw
I0621 18:26:37.569090   29728 main.go:141] libmachine: () Calling .GetMachineName
I0621 18:26:37.569324   29728 main.go:141] libmachine: (functional-620822) Calling .GetState
I0621 18:26:37.571852   29728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0621 18:26:37.571901   29728 main.go:141] libmachine: Launching plugin server for driver kvm2
I0621 18:26:37.586198   29728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34863
I0621 18:26:37.586562   29728 main.go:141] libmachine: () Calling .GetVersion
I0621 18:26:37.587121   29728 main.go:141] libmachine: Using API Version  1
I0621 18:26:37.587138   29728 main.go:141] libmachine: () Calling .SetConfigRaw
I0621 18:26:37.587410   29728 main.go:141] libmachine: () Calling .GetMachineName
I0621 18:26:37.587561   29728 main.go:141] libmachine: (functional-620822) Calling .DriverName
I0621 18:26:37.587824   29728 ssh_runner.go:195] Run: systemctl --version
I0621 18:26:37.587849   29728 main.go:141] libmachine: (functional-620822) Calling .GetSSHHostname
I0621 18:26:37.591026   29728 main.go:141] libmachine: (functional-620822) DBG | domain functional-620822 has defined MAC address 52:54:00:fb:10:a6 in network mk-functional-620822
I0621 18:26:37.591519   29728 main.go:141] libmachine: (functional-620822) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:10:a6", ip: ""} in network mk-functional-620822: {Iface:virbr1 ExpiryTime:2024-06-21 19:22:45 +0000 UTC Type:0 Mac:52:54:00:fb:10:a6 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-620822 Clientid:01:52:54:00:fb:10:a6}
I0621 18:26:37.591543   29728 main.go:141] libmachine: (functional-620822) DBG | domain functional-620822 has defined IP address 192.168.39.117 and MAC address 52:54:00:fb:10:a6 in network mk-functional-620822
I0621 18:26:37.591707   29728 main.go:141] libmachine: (functional-620822) Calling .GetSSHPort
I0621 18:26:37.591828   29728 main.go:141] libmachine: (functional-620822) Calling .GetSSHKeyPath
I0621 18:26:37.592017   29728 main.go:141] libmachine: (functional-620822) Calling .GetSSHUsername
I0621 18:26:37.592137   29728 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/functional-620822/id_rsa Username:docker}
I0621 18:26:37.693686   29728 ssh_runner.go:195] Run: sudo crictl images --output json
I0621 18:26:37.785744   29728 main.go:141] libmachine: Making call to close driver server
I0621 18:26:37.785760   29728 main.go:141] libmachine: (functional-620822) Calling .Close
I0621 18:26:37.786395   29728 main.go:141] libmachine: (functional-620822) DBG | Closing plugin on server side
I0621 18:26:37.786416   29728 main.go:141] libmachine: Successfully made call to close driver server
I0621 18:26:37.786438   29728 main.go:141] libmachine: Making call to close connection to plugin binary
I0621 18:26:37.786457   29728 main.go:141] libmachine: Making call to close driver server
I0621 18:26:37.786471   29728 main.go:141] libmachine: (functional-620822) Calling .Close
I0621 18:26:37.786714   29728 main.go:141] libmachine: (functional-620822) DBG | Closing plugin on server side
I0621 18:26:37.786747   29728 main.go:141] libmachine: Successfully made call to close driver server
I0621 18:26:37.786760   29728 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-620822 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/minikube-local-cache-test     | functional-620822  | 2a633b1513040 | 3.33kB |
| registry.k8s.io/kube-proxy              | v1.30.2            | 53c535741fb44 | 86MB   |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| gcr.io/google-containers/addon-resizer  | functional-620822  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kindest/kindnetd              | v20240513-cd2ac642 | ac1c61439df46 | 65.9MB |
| docker.io/library/nginx                 | latest             | dde0cca083bc7 | 192MB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.30.2            | 7820c83aa1394 | 63.1MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-apiserver          | v1.30.2            | 56ce0fd9fb532 | 118MB  |
| registry.k8s.io/kube-controller-manager | v1.30.2            | e874818b3caac | 112MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-620822 image ls --format table --alsologtostderr:
I0621 18:26:37.843315   29795 out.go:291] Setting OutFile to fd 1 ...
I0621 18:26:37.843615   29795 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0621 18:26:37.843627   29795 out.go:304] Setting ErrFile to fd 2...
I0621 18:26:37.843633   29795 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0621 18:26:37.843920   29795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
I0621 18:26:37.844736   29795 config.go:182] Loaded profile config "functional-620822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0621 18:26:37.844887   29795 config.go:182] Loaded profile config "functional-620822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0621 18:26:37.845448   29795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0621 18:26:37.845505   29795 main.go:141] libmachine: Launching plugin server for driver kvm2
I0621 18:26:37.861682   29795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45093
I0621 18:26:37.862142   29795 main.go:141] libmachine: () Calling .GetVersion
I0621 18:26:37.862753   29795 main.go:141] libmachine: Using API Version  1
I0621 18:26:37.862821   29795 main.go:141] libmachine: () Calling .SetConfigRaw
I0621 18:26:37.863144   29795 main.go:141] libmachine: () Calling .GetMachineName
I0621 18:26:37.863347   29795 main.go:141] libmachine: (functional-620822) Calling .GetState
I0621 18:26:37.865081   29795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0621 18:26:37.865122   29795 main.go:141] libmachine: Launching plugin server for driver kvm2
I0621 18:26:37.879846   29795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43301
I0621 18:26:37.880297   29795 main.go:141] libmachine: () Calling .GetVersion
I0621 18:26:37.880747   29795 main.go:141] libmachine: Using API Version  1
I0621 18:26:37.880760   29795 main.go:141] libmachine: () Calling .SetConfigRaw
I0621 18:26:37.881126   29795 main.go:141] libmachine: () Calling .GetMachineName
I0621 18:26:37.881364   29795 main.go:141] libmachine: (functional-620822) Calling .DriverName
I0621 18:26:37.881660   29795 ssh_runner.go:195] Run: systemctl --version
I0621 18:26:37.881694   29795 main.go:141] libmachine: (functional-620822) Calling .GetSSHHostname
I0621 18:26:37.884915   29795 main.go:141] libmachine: (functional-620822) DBG | domain functional-620822 has defined MAC address 52:54:00:fb:10:a6 in network mk-functional-620822
I0621 18:26:37.885176   29795 main.go:141] libmachine: (functional-620822) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:10:a6", ip: ""} in network mk-functional-620822: {Iface:virbr1 ExpiryTime:2024-06-21 19:22:45 +0000 UTC Type:0 Mac:52:54:00:fb:10:a6 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-620822 Clientid:01:52:54:00:fb:10:a6}
I0621 18:26:37.885206   29795 main.go:141] libmachine: (functional-620822) DBG | domain functional-620822 has defined IP address 192.168.39.117 and MAC address 52:54:00:fb:10:a6 in network mk-functional-620822
I0621 18:26:37.885350   29795 main.go:141] libmachine: (functional-620822) Calling .GetSSHPort
I0621 18:26:37.885507   29795 main.go:141] libmachine: (functional-620822) Calling .GetSSHKeyPath
I0621 18:26:37.885656   29795 main.go:141] libmachine: (functional-620822) Calling .GetSSHUsername
I0621 18:26:37.885773   29795 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/functional-620822/id_rsa Username:docker}
I0621 18:26:37.992860   29795 ssh_runner.go:195] Run: sudo crictl images --output json
I0621 18:26:38.077255   29795 main.go:141] libmachine: Making call to close driver server
I0621 18:26:38.077272   29795 main.go:141] libmachine: (functional-620822) Calling .Close
I0621 18:26:38.077537   29795 main.go:141] libmachine: Successfully made call to close driver server
I0621 18:26:38.077550   29795 main.go:141] libmachine: Making call to close connection to plugin binary
I0621 18:26:38.077556   29795 main.go:141] libmachine: Making call to close driver server
I0621 18:26:38.077562   29795 main.go:141] libmachine: (functional-620822) Calling .Close
I0621 18:26:38.077836   29795 main.go:141] libmachine: (functional-620822) DBG | Closing plugin on server side
I0621 18:26:38.077872   29795 main.go:141] libmachine: Successfully made call to close driver server
I0621 18:26:38.077885   29795 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-620822 image ls --format json --alsologtostderr:
[{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-620822"],"size":"34114467"},{"id":"53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","repoDigests":["registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.2"],"size":"85953433"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcdd
effadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"2a633b151304086989bc4c3e79af05d1f939c15e44d25884d4013a61b7996af9","repoDigests":["localhost/miniku
be-local-cache-test@sha256:de01c11aed711c948c93770a470af492dfed202df28b104d43b1f9a9cb957bbc"],"repoTags":["localhost/minikube-local-cache-test:functional-620822"],"size":"3330"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b
2576fcd82e51ecdddb751cf61e5d3846fde2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"],"size":"117609954"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"dde0cca083bc75a0af14262b1469b5141284b4399a62fef923ec0c0e3b21f5bc","repoDigests":["docker.io/library/nginx@sha256:56b388b0d79c738f4cf51bbaf184a14fab19337f4819ceb2cae7d94100262de8","docker.io/library/nginx@sha256:dca6c1f16ab4ac041e55a10ad840e6609a953e1b2ee1ec3e4d3dfe2b4dfbbf34"],"repoTags":["docker.io/library/nginx:latest"]
,"size":"191814145"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc","registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.2"],"size":"63051080"},{"id":"ac
1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f","repoDigests":["docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266","docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"],"repoTags":["docker.io/kindest/kindnetd:v20240513-cd2ac642"],"size":"65908273"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433
eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e","registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.2"],"size":"112194888"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-620822 image ls --format json --alsologtostderr:
I0621 18:26:37.807441   29784 out.go:291] Setting OutFile to fd 1 ...
I0621 18:26:37.807672   29784 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0621 18:26:37.807681   29784 out.go:304] Setting ErrFile to fd 2...
I0621 18:26:37.807685   29784 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0621 18:26:37.807878   29784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
I0621 18:26:37.810917   29784 config.go:182] Loaded profile config "functional-620822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0621 18:26:37.811156   29784 config.go:182] Loaded profile config "functional-620822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0621 18:26:37.812112   29784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0621 18:26:37.812179   29784 main.go:141] libmachine: Launching plugin server for driver kvm2
I0621 18:26:37.827903   29784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40183
I0621 18:26:37.828494   29784 main.go:141] libmachine: () Calling .GetVersion
I0621 18:26:37.829198   29784 main.go:141] libmachine: Using API Version  1
I0621 18:26:37.829225   29784 main.go:141] libmachine: () Calling .SetConfigRaw
I0621 18:26:37.829749   29784 main.go:141] libmachine: () Calling .GetMachineName
I0621 18:26:37.829956   29784 main.go:141] libmachine: (functional-620822) Calling .GetState
I0621 18:26:37.831571   29784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0621 18:26:37.831606   29784 main.go:141] libmachine: Launching plugin server for driver kvm2
I0621 18:26:37.848085   29784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34803
I0621 18:26:37.848549   29784 main.go:141] libmachine: () Calling .GetVersion
I0621 18:26:37.849124   29784 main.go:141] libmachine: Using API Version  1
I0621 18:26:37.849171   29784 main.go:141] libmachine: () Calling .SetConfigRaw
I0621 18:26:37.849472   29784 main.go:141] libmachine: () Calling .GetMachineName
I0621 18:26:37.849640   29784 main.go:141] libmachine: (functional-620822) Calling .DriverName
I0621 18:26:37.849834   29784 ssh_runner.go:195] Run: systemctl --version
I0621 18:26:37.849858   29784 main.go:141] libmachine: (functional-620822) Calling .GetSSHHostname
I0621 18:26:37.852658   29784 main.go:141] libmachine: (functional-620822) DBG | domain functional-620822 has defined MAC address 52:54:00:fb:10:a6 in network mk-functional-620822
I0621 18:26:37.852999   29784 main.go:141] libmachine: (functional-620822) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:10:a6", ip: ""} in network mk-functional-620822: {Iface:virbr1 ExpiryTime:2024-06-21 19:22:45 +0000 UTC Type:0 Mac:52:54:00:fb:10:a6 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-620822 Clientid:01:52:54:00:fb:10:a6}
I0621 18:26:37.853029   29784 main.go:141] libmachine: (functional-620822) DBG | domain functional-620822 has defined IP address 192.168.39.117 and MAC address 52:54:00:fb:10:a6 in network mk-functional-620822
I0621 18:26:37.853189   29784 main.go:141] libmachine: (functional-620822) Calling .GetSSHPort
I0621 18:26:37.853356   29784 main.go:141] libmachine: (functional-620822) Calling .GetSSHKeyPath
I0621 18:26:37.853514   29784 main.go:141] libmachine: (functional-620822) Calling .GetSSHUsername
I0621 18:26:37.853667   29784 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/functional-620822/id_rsa Username:docker}
I0621 18:26:37.968212   29784 ssh_runner.go:195] Run: sudo crictl images --output json
I0621 18:26:38.045020   29784 main.go:141] libmachine: Making call to close driver server
I0621 18:26:38.045037   29784 main.go:141] libmachine: (functional-620822) Calling .Close
I0621 18:26:38.045378   29784 main.go:141] libmachine: (functional-620822) DBG | Closing plugin on server side
I0621 18:26:38.045415   29784 main.go:141] libmachine: Successfully made call to close driver server
I0621 18:26:38.045436   29784 main.go:141] libmachine: Making call to close connection to plugin binary
I0621 18:26:38.045451   29784 main.go:141] libmachine: Making call to close driver server
I0621 18:26:38.045463   29784 main.go:141] libmachine: (functional-620822) Calling .Close
I0621 18:26:38.045667   29784 main.go:141] libmachine: (functional-620822) DBG | Closing plugin on server side
I0621 18:26:38.045697   29784 main.go:141] libmachine: Successfully made call to close driver server
I0621 18:26:38.045707   29784 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-620822 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f
repoDigests:
- docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266
- docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8
repoTags:
- docker.io/kindest/kindnetd:v20240513-cd2ac642
size: "65908273"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772
repoDigests:
- registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961
- registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec
repoTags:
- registry.k8s.io/kube-proxy:v1.30.2
size: "85953433"
- id: dde0cca083bc75a0af14262b1469b5141284b4399a62fef923ec0c0e3b21f5bc
repoDigests:
- docker.io/library/nginx@sha256:56b388b0d79c738f4cf51bbaf184a14fab19337f4819ceb2cae7d94100262de8
- docker.io/library/nginx@sha256:dca6c1f16ab4ac041e55a10ad840e6609a953e1b2ee1ec3e4d3dfe2b4dfbbf34
repoTags:
- docker.io/library/nginx:latest
size: "191814145"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816
- registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.2
size: "117609954"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e
- registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.2
size: "112194888"
- id: 7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc
- registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.2
size: "63051080"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-620822
size: "34114467"
- id: 2a633b151304086989bc4c3e79af05d1f939c15e44d25884d4013a61b7996af9
repoDigests:
- localhost/minikube-local-cache-test@sha256:de01c11aed711c948c93770a470af492dfed202df28b104d43b1f9a9cb957bbc
repoTags:
- localhost/minikube-local-cache-test:functional-620822
size: "3330"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-620822 image ls --format yaml --alsologtostderr:
I0621 18:26:37.552528   29729 out.go:291] Setting OutFile to fd 1 ...
I0621 18:26:37.552771   29729 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0621 18:26:37.552780   29729 out.go:304] Setting ErrFile to fd 2...
I0621 18:26:37.552786   29729 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0621 18:26:37.552952   29729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
I0621 18:26:37.553495   29729 config.go:182] Loaded profile config "functional-620822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0621 18:26:37.553615   29729 config.go:182] Loaded profile config "functional-620822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0621 18:26:37.554048   29729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0621 18:26:37.554101   29729 main.go:141] libmachine: Launching plugin server for driver kvm2
I0621 18:26:37.569073   29729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45301
I0621 18:26:37.569664   29729 main.go:141] libmachine: () Calling .GetVersion
I0621 18:26:37.570229   29729 main.go:141] libmachine: Using API Version  1
I0621 18:26:37.570250   29729 main.go:141] libmachine: () Calling .SetConfigRaw
I0621 18:26:37.570583   29729 main.go:141] libmachine: () Calling .GetMachineName
I0621 18:26:37.570779   29729 main.go:141] libmachine: (functional-620822) Calling .GetState
I0621 18:26:37.572554   29729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0621 18:26:37.572593   29729 main.go:141] libmachine: Launching plugin server for driver kvm2
I0621 18:26:37.589474   29729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42129
I0621 18:26:37.589862   29729 main.go:141] libmachine: () Calling .GetVersion
I0621 18:26:37.590333   29729 main.go:141] libmachine: Using API Version  1
I0621 18:26:37.590353   29729 main.go:141] libmachine: () Calling .SetConfigRaw
I0621 18:26:37.590643   29729 main.go:141] libmachine: () Calling .GetMachineName
I0621 18:26:37.590775   29729 main.go:141] libmachine: (functional-620822) Calling .DriverName
I0621 18:26:37.590966   29729 ssh_runner.go:195] Run: systemctl --version
I0621 18:26:37.590994   29729 main.go:141] libmachine: (functional-620822) Calling .GetSSHHostname
I0621 18:26:37.594337   29729 main.go:141] libmachine: (functional-620822) DBG | domain functional-620822 has defined MAC address 52:54:00:fb:10:a6 in network mk-functional-620822
I0621 18:26:37.594807   29729 main.go:141] libmachine: (functional-620822) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:10:a6", ip: ""} in network mk-functional-620822: {Iface:virbr1 ExpiryTime:2024-06-21 19:22:45 +0000 UTC Type:0 Mac:52:54:00:fb:10:a6 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-620822 Clientid:01:52:54:00:fb:10:a6}
I0621 18:26:37.594883   29729 main.go:141] libmachine: (functional-620822) DBG | domain functional-620822 has defined IP address 192.168.39.117 and MAC address 52:54:00:fb:10:a6 in network mk-functional-620822
I0621 18:26:37.595081   29729 main.go:141] libmachine: (functional-620822) Calling .GetSSHPort
I0621 18:26:37.595267   29729 main.go:141] libmachine: (functional-620822) Calling .GetSSHKeyPath
I0621 18:26:37.595410   29729 main.go:141] libmachine: (functional-620822) Calling .GetSSHUsername
I0621 18:26:37.595525   29729 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/functional-620822/id_rsa Username:docker}
I0621 18:26:37.683969   29729 ssh_runner.go:195] Run: sudo crictl images --output json
I0621 18:26:37.751054   29729 main.go:141] libmachine: Making call to close driver server
I0621 18:26:37.751076   29729 main.go:141] libmachine: (functional-620822) Calling .Close
I0621 18:26:37.751348   29729 main.go:141] libmachine: Successfully made call to close driver server
I0621 18:26:37.751360   29729 main.go:141] libmachine: (functional-620822) DBG | Closing plugin on server side
I0621 18:26:37.751369   29729 main.go:141] libmachine: Making call to close connection to plugin binary
I0621 18:26:37.751378   29729 main.go:141] libmachine: Making call to close driver server
I0621 18:26:37.751385   29729 main.go:141] libmachine: (functional-620822) Calling .Close
I0621 18:26:37.751613   29729 main.go:141] libmachine: Successfully made call to close driver server
I0621 18:26:37.751628   29729 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-620822 ssh pgrep buildkitd: exit status 1 (192.817287ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 image build -t localhost/my-image:functional-620822 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-620822 image build -t localhost/my-image:functional-620822 testdata/build --alsologtostderr: (2.705511164s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-620822 image build -t localhost/my-image:functional-620822 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 34d2dcff4b3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-620822
--> 5cebc7d42cf
Successfully tagged localhost/my-image:functional-620822
5cebc7d42cfa4c9d377b46671c9bb0979f25247bb1c21c52007a300048df51a7
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-620822 image build -t localhost/my-image:functional-620822 testdata/build --alsologtostderr:
I0621 18:26:38.281650   29865 out.go:291] Setting OutFile to fd 1 ...
I0621 18:26:38.281828   29865 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0621 18:26:38.281882   29865 out.go:304] Setting ErrFile to fd 2...
I0621 18:26:38.281900   29865 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0621 18:26:38.282300   29865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
I0621 18:26:38.282908   29865 config.go:182] Loaded profile config "functional-620822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0621 18:26:38.283472   29865 config.go:182] Loaded profile config "functional-620822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0621 18:26:38.283844   29865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0621 18:26:38.283923   29865 main.go:141] libmachine: Launching plugin server for driver kvm2
I0621 18:26:38.298727   29865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
I0621 18:26:38.299199   29865 main.go:141] libmachine: () Calling .GetVersion
I0621 18:26:38.299778   29865 main.go:141] libmachine: Using API Version  1
I0621 18:26:38.299803   29865 main.go:141] libmachine: () Calling .SetConfigRaw
I0621 18:26:38.300146   29865 main.go:141] libmachine: () Calling .GetMachineName
I0621 18:26:38.300352   29865 main.go:141] libmachine: (functional-620822) Calling .GetState
I0621 18:26:38.302147   29865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0621 18:26:38.302183   29865 main.go:141] libmachine: Launching plugin server for driver kvm2
I0621 18:26:38.316379   29865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40841
I0621 18:26:38.316777   29865 main.go:141] libmachine: () Calling .GetVersion
I0621 18:26:38.317226   29865 main.go:141] libmachine: Using API Version  1
I0621 18:26:38.317240   29865 main.go:141] libmachine: () Calling .SetConfigRaw
I0621 18:26:38.317540   29865 main.go:141] libmachine: () Calling .GetMachineName
I0621 18:26:38.317718   29865 main.go:141] libmachine: (functional-620822) Calling .DriverName
I0621 18:26:38.317958   29865 ssh_runner.go:195] Run: systemctl --version
I0621 18:26:38.317985   29865 main.go:141] libmachine: (functional-620822) Calling .GetSSHHostname
I0621 18:26:38.320568   29865 main.go:141] libmachine: (functional-620822) DBG | domain functional-620822 has defined MAC address 52:54:00:fb:10:a6 in network mk-functional-620822
I0621 18:26:38.320961   29865 main.go:141] libmachine: (functional-620822) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:10:a6", ip: ""} in network mk-functional-620822: {Iface:virbr1 ExpiryTime:2024-06-21 19:22:45 +0000 UTC Type:0 Mac:52:54:00:fb:10:a6 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-620822 Clientid:01:52:54:00:fb:10:a6}
I0621 18:26:38.320992   29865 main.go:141] libmachine: (functional-620822) DBG | domain functional-620822 has defined IP address 192.168.39.117 and MAC address 52:54:00:fb:10:a6 in network mk-functional-620822
I0621 18:26:38.321155   29865 main.go:141] libmachine: (functional-620822) Calling .GetSSHPort
I0621 18:26:38.321331   29865 main.go:141] libmachine: (functional-620822) Calling .GetSSHKeyPath
I0621 18:26:38.321513   29865 main.go:141] libmachine: (functional-620822) Calling .GetSSHUsername
I0621 18:26:38.321656   29865 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/functional-620822/id_rsa Username:docker}
I0621 18:26:38.401032   29865 build_images.go:161] Building image from path: /tmp/build.555838259.tar
I0621 18:26:38.401121   29865 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0621 18:26:38.411814   29865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.555838259.tar
I0621 18:26:38.416112   29865 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.555838259.tar: stat -c "%s %y" /var/lib/minikube/build/build.555838259.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.555838259.tar': No such file or directory
I0621 18:26:38.416144   29865 ssh_runner.go:362] scp /tmp/build.555838259.tar --> /var/lib/minikube/build/build.555838259.tar (3072 bytes)
I0621 18:26:38.443438   29865 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.555838259
I0621 18:26:38.452749   29865 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.555838259 -xf /var/lib/minikube/build/build.555838259.tar
I0621 18:26:38.461779   29865 crio.go:315] Building image: /var/lib/minikube/build/build.555838259
I0621 18:26:38.461859   29865 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-620822 /var/lib/minikube/build/build.555838259 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0621 18:26:40.919814   29865 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-620822 /var/lib/minikube/build/build.555838259 --cgroup-manager=cgroupfs: (2.457899317s)
I0621 18:26:40.919897   29865 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.555838259
I0621 18:26:40.931727   29865 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.555838259.tar
I0621 18:26:40.943449   29865 build_images.go:217] Built localhost/my-image:functional-620822 from /tmp/build.555838259.tar
I0621 18:26:40.943482   29865 build_images.go:133] succeeded building to: functional-620822
I0621 18:26:40.943486   29865 build_images.go:134] failed building to: 
I0621 18:26:40.943511   29865 main.go:141] libmachine: Making call to close driver server
I0621 18:26:40.943526   29865 main.go:141] libmachine: (functional-620822) Calling .Close
I0621 18:26:40.943824   29865 main.go:141] libmachine: Successfully made call to close driver server
I0621 18:26:40.943841   29865 main.go:141] libmachine: Making call to close connection to plugin binary
I0621 18:26:40.943851   29865 main.go:141] libmachine: Making call to close driver server
I0621 18:26:40.943859   29865 main.go:141] libmachine: (functional-620822) Calling .Close
I0621 18:26:40.944139   29865 main.go:141] libmachine: Successfully made call to close driver server
I0621 18:26:40.944162   29865 main.go:141] libmachine: Making call to close connection to plugin binary
I0621 18:26:40.944141   29865 main.go:141] libmachine: (functional-620822) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.11s)

TestFunctional/parallel/ImageCommands/Setup (2.18s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.154203209s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-620822
2024/06/21 18:26:10 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.18s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 image load --daemon gcr.io/google-containers/addon-resizer:functional-620822 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-620822 image load --daemon gcr.io/google-containers/addon-resizer:functional-620822 --alsologtostderr: (3.809592483s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.06s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.76s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 image load --daemon gcr.io/google-containers/addon-resizer:functional-620822 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-620822 image load --daemon gcr.io/google-containers/addon-resizer:functional-620822 --alsologtostderr: (4.532585172s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.76s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (12.79s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.79714763s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-620822
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 image load --daemon gcr.io/google-containers/addon-resizer:functional-620822 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-620822 image load --daemon gcr.io/google-containers/addon-resizer:functional-620822 --alsologtostderr: (10.707390725s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (12.79s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.92s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 image save gcr.io/google-containers/addon-resizer:functional-620822 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-620822 image save gcr.io/google-containers/addon-resizer:functional-620822 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.916602731s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.92s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 image rm gcr.io/google-containers/addon-resizer:functional-620822 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.61s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-620822 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.391597811s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.61s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.98s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-620822
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-620822 image save --daemon gcr.io/google-containers/addon-resizer:functional-620822 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-620822
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.98s)
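The ImageCommands runs above add up to a complete image round-trip: load a tagged image into the cluster runtime, save it to a tarball, remove it, reload it from the file, and list images to confirm. Below is a minimal Go sketch of that same flow, shelling out to the minikube binary the way the functional tests do; the binary path, profile name, image tag, and tarball path are taken from this run but should be treated as placeholders.

// roundtrip.go - sketch of the load/save/remove/reload flow exercised by the
// ImageCommands tests. All names below are illustrative, not canonical.
package main

import (
	"fmt"
	"os/exec"
)

func run(bin string, args ...string) error {
	out, err := exec.Command(bin, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s\n", bin, args, out)
	return err
}

func main() {
	const (
		minikube = "out/minikube-linux-amd64" // workspace-relative binary, as in this run
		profile  = "functional-620822"
		image    = "gcr.io/google-containers/addon-resizer:" + profile
		tarball  = "/tmp/addon-resizer-save.tar"
	)
	steps := [][]string{
		{"-p", profile, "image", "load", "--daemon", image}, // docker daemon -> cluster
		{"-p", profile, "image", "save", image, tarball},    // cluster -> tarball
		{"-p", profile, "image", "rm", image},               // remove from cluster
		{"-p", profile, "image", "load", tarball},           // tarball -> cluster
		{"-p", profile, "image", "ls"},                      // verify it is listed again
	}
	for _, args := range steps {
		if err := run(minikube, args...); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}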

TestFunctional/delete_addon-resizer_images (0.07s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-620822
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-620822
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-620822
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/NodeLabels (0.06s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-406291 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestJSONOutput/start/Command (95.6s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-946343 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-946343 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m35.595752013s)
--- PASS: TestJSONOutput/start/Command (95.60s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-946343 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-946343 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.34s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-946343 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-946343 --output=json --user=testUser: (7.339695078s)
--- PASS: TestJSONOutput/stop/Command (7.34s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-445916 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-445916 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.962287ms)
-- stdout --
	{"specversion":"1.0","id":"08554c97-12b9-4499-a31d-7076567f47cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-445916] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c7706e64-d254-4162-a055-18c2f2c10193","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19112"}}
	{"specversion":"1.0","id":"b82f0db9-ab21-4e8d-aae6-c71f13561f8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0fb82c49-457d-45bf-946f-5be55028878f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig"}}
	{"specversion":"1.0","id":"1a586ec9-eb84-45b4-8ecb-15e9e6a4b2ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube"}}
	{"specversion":"1.0","id":"e69f139e-2886-4013-8735-d4750ad6bae4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2e067d75-0d5e-4238-8911-921c32c8930c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7bcb4f66-610b-4fb8-a38e-2b40e5c6c53c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-445916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-445916
--- PASS: TestErrorJSONOutput (0.19s)
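The --output=json events captured above use the CloudEvents-style envelope that minikube prints one per line (specversion, id, source, type, datacontenttype, data). A rough Go sketch of consuming that stream follows; the field names mirror the events shown in this run's stdout, and nothing beyond those fields is assumed.

// Decode minikube's line-delimited JSON events (as in the TestErrorJSONOutput
// stdout above). Pipe `minikube start --output=json ...` into stdin.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error (exit code %s): %s\n", ev.Data["exitcode"], ev.Data["message"])
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		default:
			fmt.Println(ev.Data["message"])
		}
	}
}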

TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (87.77s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-000420 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-000420 --driver=kvm2  --container-runtime=crio: (42.313450646s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-003596 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-003596 --driver=kvm2  --container-runtime=crio: (42.57741728s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-000420
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-003596
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-003596" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-003596
helpers_test.go:175: Cleaning up "first-000420" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-000420
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-000420: (1.037114963s)
--- PASS: TestMinikubeProfile (87.77s)
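TestMinikubeProfile creates two profiles, switches between them with `minikube profile`, and inspects them with `profile list -ojson`. A small Go sketch of reading that JSON without assuming a fixed schema (the exact shape of the profile list output is not shown in this log, so the decode stays generic); the binary path is the workspace-relative one used in this run.

// Print the top-level keys of `minikube profile list -ojson` as raw JSON.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var profiles map[string]json.RawMessage
	if err := json.Unmarshal(out, &profiles); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	for key, raw := range profiles {
		fmt.Printf("%s: %s\n", key, raw)
	}
}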

TestMountStart/serial/StartWithMountFirst (25.83s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-345488 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-345488 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.826225266s)
--- PASS: TestMountStart/serial/StartWithMountFirst (25.83s)

TestMountStart/serial/VerifyMountFirst (0.36s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-345488 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-345488 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)
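The mount verification steps here SSH into the node, list the mount point, and look for a 9p filesystem in the mount table. A sketch of the same check from Go, under the assumption that the profile name and /minikube-host path from this run are placeholders; the grep is done in Go instead of in the shell.

// Check that a minikube 9p host mount is present, as the MountStart tests do.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const (
		minikube = "out/minikube-linux-amd64"
		profile  = "mount-start-1-345488"
	)
	if out, err := exec.Command(minikube, "-p", profile, "ssh", "--", "ls", "/minikube-host").CombinedOutput(); err != nil {
		fmt.Printf("mount dir not listable: %v\n%s", err, out)
		return
	}
	out, err := exec.Command(minikube, "-p", profile, "ssh", "--", "mount").CombinedOutput()
	if err != nil {
		fmt.Printf("mount listing failed: %v\n", err)
		return
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "9p") {
			fmt.Println("9p mount present:", line)
			return
		}
	}
	fmt.Println("no 9p mount found")
}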

TestMountStart/serial/StartWithMountSecond (26.85s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-357543 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0621 19:00:54.862174   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-357543 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.848892939s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.85s)

TestMountStart/serial/VerifyMountSecond (0.36s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-357543 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-357543 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

TestMountStart/serial/DeleteFirst (0.67s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-345488 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-357543 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-357543 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (1.27s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-357543
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-357543: (1.274521788s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (22.61s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-357543
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-357543: (21.609673342s)
--- PASS: TestMountStart/serial/RestartStopped (22.61s)

TestMountStart/serial/VerifyMountPostStop (0.38s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-357543 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-357543 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (96.32s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-851952 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-851952 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m35.929136936s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (96.32s)

TestMultiNode/serial/DeployApp2Nodes (5.56s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851952 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851952 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-851952 -- rollout status deployment/busybox: (4.104812901s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851952 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851952 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851952 -- exec busybox-fc5497c4f-lsrx2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851952 -- exec busybox-fc5497c4f-rwq2d -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851952 -- exec busybox-fc5497c4f-lsrx2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851952 -- exec busybox-fc5497c4f-rwq2d -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851952 -- exec busybox-fc5497c4f-lsrx2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851952 -- exec busybox-fc5497c4f-rwq2d -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.56s)
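DeployApp2Nodes verifies the busybox deployment by resolving kubernetes.io, kubernetes.default, and kubernetes.default.svc.cluster.local from every pod. A sketch of the same per-pod DNS check via kubectl; the context name is from this run, and listing every pod in the default namespace (rather than filtering by label) mirrors how the test enumerates pods.

// Run nslookup for a fixed set of names inside every pod, per the
// DeployApp2Nodes checks above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const context = "multinode-851952"
	out, err := exec.Command("kubectl", "--context", context, "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		fmt.Println("listing pods failed:", err)
		return
	}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, host := range names {
			res, err := exec.Command("kubectl", "--context", context, "exec", pod, "--", "nslookup", host).CombinedOutput()
			if err != nil {
				fmt.Printf("%s: nslookup %s failed: %v\n%s", pod, host, err, res)
			} else {
				fmt.Printf("%s: nslookup %s ok\n", pod, host)
			}
		}
	}
}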

TestMultiNode/serial/PingHostFrom2Pods (0.77s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851952 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851952 -- exec busybox-fc5497c4f-lsrx2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851952 -- exec busybox-fc5497c4f-lsrx2 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851952 -- exec busybox-fc5497c4f-rwq2d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851952 -- exec busybox-fc5497c4f-rwq2d -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)

TestMultiNode/serial/AddNode (36.27s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-851952 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-851952 -v 3 --alsologtostderr: (35.721244454s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (36.27s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-851952 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.21s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (6.94s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 cp testdata/cp-test.txt multinode-851952:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 ssh -n multinode-851952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 cp multinode-851952:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile293116882/001/cp-test_multinode-851952.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 ssh -n multinode-851952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 cp multinode-851952:/home/docker/cp-test.txt multinode-851952-m02:/home/docker/cp-test_multinode-851952_multinode-851952-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 ssh -n multinode-851952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 ssh -n multinode-851952-m02 "sudo cat /home/docker/cp-test_multinode-851952_multinode-851952-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 cp multinode-851952:/home/docker/cp-test.txt multinode-851952-m03:/home/docker/cp-test_multinode-851952_multinode-851952-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 ssh -n multinode-851952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 ssh -n multinode-851952-m03 "sudo cat /home/docker/cp-test_multinode-851952_multinode-851952-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 cp testdata/cp-test.txt multinode-851952-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 ssh -n multinode-851952-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 cp multinode-851952-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile293116882/001/cp-test_multinode-851952-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 ssh -n multinode-851952-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 cp multinode-851952-m02:/home/docker/cp-test.txt multinode-851952:/home/docker/cp-test_multinode-851952-m02_multinode-851952.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 ssh -n multinode-851952-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 ssh -n multinode-851952 "sudo cat /home/docker/cp-test_multinode-851952-m02_multinode-851952.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 cp multinode-851952-m02:/home/docker/cp-test.txt multinode-851952-m03:/home/docker/cp-test_multinode-851952-m02_multinode-851952-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 ssh -n multinode-851952-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 ssh -n multinode-851952-m03 "sudo cat /home/docker/cp-test_multinode-851952-m02_multinode-851952-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 cp testdata/cp-test.txt multinode-851952-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 ssh -n multinode-851952-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 cp multinode-851952-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile293116882/001/cp-test_multinode-851952-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 ssh -n multinode-851952-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 cp multinode-851952-m03:/home/docker/cp-test.txt multinode-851952:/home/docker/cp-test_multinode-851952-m03_multinode-851952.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 ssh -n multinode-851952-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 ssh -n multinode-851952 "sudo cat /home/docker/cp-test_multinode-851952-m03_multinode-851952.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 cp multinode-851952-m03:/home/docker/cp-test.txt multinode-851952-m02:/home/docker/cp-test_multinode-851952-m03_multinode-851952-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 ssh -n multinode-851952-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 ssh -n multinode-851952-m02 "sudo cat /home/docker/cp-test_multinode-851952-m03_multinode-851952-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.94s)
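CopyFile round-trips a test file through every node: `minikube cp` pushes it to the node, then `ssh -n <node> "sudo cat ..."` reads it back for comparison. A compact Go sketch of that pattern follows; profile, node names, and paths mirror this run and are not meant as canonical values.

// Copy a local file to each node and confirm its contents over SSH,
// following the CopyFile test's cp/ssh pattern.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const (
		minikube = "out/minikube-linux-amd64"
		profile  = "multinode-851952"
		src      = "testdata/cp-test.txt"
		dst      = "/home/docker/cp-test.txt"
	)
	want, err := os.ReadFile(src)
	if err != nil {
		fmt.Println("read source:", err)
		return
	}
	for _, node := range []string{profile, profile + "-m02", profile + "-m03"} {
		if out, err := exec.Command(minikube, "-p", profile, "cp", src, node+":"+dst).CombinedOutput(); err != nil {
			fmt.Printf("cp to %s failed: %v\n%s", node, err, out)
			continue
		}
		got, err := exec.Command(minikube, "-p", profile, "ssh", "-n", node, "sudo cat "+dst).Output()
		if err != nil {
			fmt.Printf("cat on %s failed: %v\n", node, err)
			continue
		}
		if bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			fmt.Printf("%s: contents match\n", node)
		} else {
			fmt.Printf("%s: contents differ\n", node)
		}
	}
}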

TestMultiNode/serial/StopNode (2.2s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-851952 node stop m03: (1.385300502s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-851952 status: exit status 7 (399.611492ms)
-- stdout --
	multinode-851952
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-851952-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-851952-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-851952 status --alsologtostderr: exit status 7 (413.764585ms)
-- stdout --
	multinode-851952
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-851952-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-851952-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0621 19:03:50.522403   45933 out.go:291] Setting OutFile to fd 1 ...
	I0621 19:03:50.522670   45933 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 19:03:50.522682   45933 out.go:304] Setting ErrFile to fd 2...
	I0621 19:03:50.522686   45933 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0621 19:03:50.522923   45933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19112-8111/.minikube/bin
	I0621 19:03:50.523136   45933 out.go:298] Setting JSON to false
	I0621 19:03:50.523162   45933 mustload.go:65] Loading cluster: multinode-851952
	I0621 19:03:50.523269   45933 notify.go:220] Checking for updates...
	I0621 19:03:50.523616   45933 config.go:182] Loaded profile config "multinode-851952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0621 19:03:50.523632   45933 status.go:255] checking status of multinode-851952 ...
	I0621 19:03:50.524075   45933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 19:03:50.524135   45933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 19:03:50.542916   45933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38633
	I0621 19:03:50.543341   45933 main.go:141] libmachine: () Calling .GetVersion
	I0621 19:03:50.543878   45933 main.go:141] libmachine: Using API Version  1
	I0621 19:03:50.543905   45933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 19:03:50.544203   45933 main.go:141] libmachine: () Calling .GetMachineName
	I0621 19:03:50.544439   45933 main.go:141] libmachine: (multinode-851952) Calling .GetState
	I0621 19:03:50.545765   45933 status.go:330] multinode-851952 host status = "Running" (err=<nil>)
	I0621 19:03:50.545780   45933 host.go:66] Checking if "multinode-851952" exists ...
	I0621 19:03:50.546188   45933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 19:03:50.546227   45933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 19:03:50.561432   45933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33499
	I0621 19:03:50.561828   45933 main.go:141] libmachine: () Calling .GetVersion
	I0621 19:03:50.562302   45933 main.go:141] libmachine: Using API Version  1
	I0621 19:03:50.562323   45933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 19:03:50.562621   45933 main.go:141] libmachine: () Calling .GetMachineName
	I0621 19:03:50.562831   45933 main.go:141] libmachine: (multinode-851952) Calling .GetIP
	I0621 19:03:50.565447   45933 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:03:50.565912   45933 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:03:50.565940   45933 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:03:50.566106   45933 host.go:66] Checking if "multinode-851952" exists ...
	I0621 19:03:50.566418   45933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 19:03:50.566450   45933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 19:03:50.581476   45933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46147
	I0621 19:03:50.582008   45933 main.go:141] libmachine: () Calling .GetVersion
	I0621 19:03:50.582575   45933 main.go:141] libmachine: Using API Version  1
	I0621 19:03:50.582600   45933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 19:03:50.582896   45933 main.go:141] libmachine: () Calling .GetMachineName
	I0621 19:03:50.583050   45933 main.go:141] libmachine: (multinode-851952) Calling .DriverName
	I0621 19:03:50.583293   45933 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 19:03:50.583320   45933 main.go:141] libmachine: (multinode-851952) Calling .GetSSHHostname
	I0621 19:03:50.586318   45933 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:03:50.586742   45933 main.go:141] libmachine: (multinode-851952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:b9:c8", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:01:36 +0000 UTC Type:0 Mac:52:54:00:af:b9:c8 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-851952 Clientid:01:52:54:00:af:b9:c8}
	I0621 19:03:50.586768   45933 main.go:141] libmachine: (multinode-851952) DBG | domain multinode-851952 has defined IP address 192.168.39.146 and MAC address 52:54:00:af:b9:c8 in network mk-multinode-851952
	I0621 19:03:50.586951   45933 main.go:141] libmachine: (multinode-851952) Calling .GetSSHPort
	I0621 19:03:50.587119   45933 main.go:141] libmachine: (multinode-851952) Calling .GetSSHKeyPath
	I0621 19:03:50.587306   45933 main.go:141] libmachine: (multinode-851952) Calling .GetSSHUsername
	I0621 19:03:50.587441   45933 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/multinode-851952/id_rsa Username:docker}
	I0621 19:03:50.669558   45933 ssh_runner.go:195] Run: systemctl --version
	I0621 19:03:50.675400   45933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 19:03:50.689558   45933 kubeconfig.go:125] found "multinode-851952" server: "https://192.168.39.146:8443"
	I0621 19:03:50.689586   45933 api_server.go:166] Checking apiserver status ...
	I0621 19:03:50.689621   45933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0621 19:03:50.703240   45933 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1115/cgroup
	W0621 19:03:50.712512   45933 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1115/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0621 19:03:50.712569   45933 ssh_runner.go:195] Run: ls
	I0621 19:03:50.716791   45933 api_server.go:253] Checking apiserver healthz at https://192.168.39.146:8443/healthz ...
	I0621 19:03:50.721188   45933 api_server.go:279] https://192.168.39.146:8443/healthz returned 200:
	ok
	I0621 19:03:50.721212   45933 status.go:422] multinode-851952 apiserver status = Running (err=<nil>)
	I0621 19:03:50.721224   45933 status.go:257] multinode-851952 status: &{Name:multinode-851952 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0621 19:03:50.721239   45933 status.go:255] checking status of multinode-851952-m02 ...
	I0621 19:03:50.721517   45933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 19:03:50.721546   45933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 19:03:50.736643   45933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34801
	I0621 19:03:50.737064   45933 main.go:141] libmachine: () Calling .GetVersion
	I0621 19:03:50.737529   45933 main.go:141] libmachine: Using API Version  1
	I0621 19:03:50.737545   45933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 19:03:50.737881   45933 main.go:141] libmachine: () Calling .GetMachineName
	I0621 19:03:50.738085   45933 main.go:141] libmachine: (multinode-851952-m02) Calling .GetState
	I0621 19:03:50.739718   45933 status.go:330] multinode-851952-m02 host status = "Running" (err=<nil>)
	I0621 19:03:50.739735   45933 host.go:66] Checking if "multinode-851952-m02" exists ...
	I0621 19:03:50.740041   45933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 19:03:50.740083   45933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 19:03:50.754883   45933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35507
	I0621 19:03:50.755349   45933 main.go:141] libmachine: () Calling .GetVersion
	I0621 19:03:50.755802   45933 main.go:141] libmachine: Using API Version  1
	I0621 19:03:50.755821   45933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 19:03:50.756150   45933 main.go:141] libmachine: () Calling .GetMachineName
	I0621 19:03:50.756356   45933 main.go:141] libmachine: (multinode-851952-m02) Calling .GetIP
	I0621 19:03:50.759401   45933 main.go:141] libmachine: (multinode-851952-m02) DBG | domain multinode-851952-m02 has defined MAC address 52:54:00:fa:b4:36 in network mk-multinode-851952
	I0621 19:03:50.759833   45933 main.go:141] libmachine: (multinode-851952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:b4:36", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:02:33 +0000 UTC Type:0 Mac:52:54:00:fa:b4:36 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-851952-m02 Clientid:01:52:54:00:fa:b4:36}
	I0621 19:03:50.759851   45933 main.go:141] libmachine: (multinode-851952-m02) DBG | domain multinode-851952-m02 has defined IP address 192.168.39.172 and MAC address 52:54:00:fa:b4:36 in network mk-multinode-851952
	I0621 19:03:50.760079   45933 host.go:66] Checking if "multinode-851952-m02" exists ...
	I0621 19:03:50.760395   45933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 19:03:50.760429   45933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 19:03:50.775375   45933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37453
	I0621 19:03:50.775805   45933 main.go:141] libmachine: () Calling .GetVersion
	I0621 19:03:50.776277   45933 main.go:141] libmachine: Using API Version  1
	I0621 19:03:50.776298   45933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 19:03:50.776590   45933 main.go:141] libmachine: () Calling .GetMachineName
	I0621 19:03:50.776759   45933 main.go:141] libmachine: (multinode-851952-m02) Calling .DriverName
	I0621 19:03:50.776939   45933 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0621 19:03:50.776960   45933 main.go:141] libmachine: (multinode-851952-m02) Calling .GetSSHHostname
	I0621 19:03:50.779783   45933 main.go:141] libmachine: (multinode-851952-m02) DBG | domain multinode-851952-m02 has defined MAC address 52:54:00:fa:b4:36 in network mk-multinode-851952
	I0621 19:03:50.780213   45933 main.go:141] libmachine: (multinode-851952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:b4:36", ip: ""} in network mk-multinode-851952: {Iface:virbr1 ExpiryTime:2024-06-21 20:02:33 +0000 UTC Type:0 Mac:52:54:00:fa:b4:36 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-851952-m02 Clientid:01:52:54:00:fa:b4:36}
	I0621 19:03:50.780249   45933 main.go:141] libmachine: (multinode-851952-m02) DBG | domain multinode-851952-m02 has defined IP address 192.168.39.172 and MAC address 52:54:00:fa:b4:36 in network mk-multinode-851952
	I0621 19:03:50.780372   45933 main.go:141] libmachine: (multinode-851952-m02) Calling .GetSSHPort
	I0621 19:03:50.780550   45933 main.go:141] libmachine: (multinode-851952-m02) Calling .GetSSHKeyPath
	I0621 19:03:50.780674   45933 main.go:141] libmachine: (multinode-851952-m02) Calling .GetSSHUsername
	I0621 19:03:50.780811   45933 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19112-8111/.minikube/machines/multinode-851952-m02/id_rsa Username:docker}
	I0621 19:03:50.861384   45933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0621 19:03:50.875382   45933 status.go:257] multinode-851952-m02 status: &{Name:multinode-851952-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0621 19:03:50.875408   45933 status.go:255] checking status of multinode-851952-m03 ...
	I0621 19:03:50.875709   45933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0621 19:03:50.875745   45933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0621 19:03:50.891559   45933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38563
	I0621 19:03:50.891995   45933 main.go:141] libmachine: () Calling .GetVersion
	I0621 19:03:50.892439   45933 main.go:141] libmachine: Using API Version  1
	I0621 19:03:50.892462   45933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0621 19:03:50.892841   45933 main.go:141] libmachine: () Calling .GetMachineName
	I0621 19:03:50.893034   45933 main.go:141] libmachine: (multinode-851952-m03) Calling .GetState
	I0621 19:03:50.894676   45933 status.go:330] multinode-851952-m03 host status = "Stopped" (err=<nil>)
	I0621 19:03:50.894690   45933 status.go:343] host is not running, skipping remaining checks
	I0621 19:03:50.894696   45933 status.go:257] multinode-851952-m03 status: &{Name:multinode-851952-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.20s)
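After m03 is stopped, `minikube status` still prints the per-node table but returns a non-zero exit code (exit status 7 in this run). When shelling out, the exit code and the output both matter, since a stopped node is reported through the exit code rather than a command failure. A sketch of handling that, assuming nothing about exit-code meanings beyond what this run shows:

// Run `minikube status` and surface both its table and its exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-851952", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all components reported running")
	case errors.As(err, &exitErr):
		fmt.Println("status exited with code", exitErr.ExitCode(), "- inspect the table above for stopped nodes")
	default:
		fmt.Println("could not run minikube:", err)
	}
}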

TestMultiNode/serial/StartAfterStop (27.17s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 node start m03 -v=7 --alsologtostderr
E0621 19:03:57.911445   15329 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19112-8111/.minikube/profiles/functional-620822/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-851952 node start m03 -v=7 --alsologtostderr: (26.564443907s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (27.17s)

TestMultiNode/serial/DeleteNode (2.06s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-851952 node delete m03: (1.553471828s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.06s)

TestMultiNode/serial/RestartMultiNode (167.52s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-851952 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-851952 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m47.007683369s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851952 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (167.52s)

TestMultiNode/serial/ValidateNameConflict (41.75s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-851952
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-851952-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-851952-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (60.791387ms)

                                                
                                                
-- stdout --
	* [multinode-851952-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-851952-m02' is duplicated with machine name 'multinode-851952-m02' in profile 'multinode-851952'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-851952-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-851952-m03 --driver=kvm2  --container-runtime=crio: (40.661331568s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-851952
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-851952: exit status 80 (204.819948ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-851952 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-851952-m03 already exists in multinode-851952-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-851952-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.75s)
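
For reference, the conflict handling exercised here can be reproduced outside the test harness with the same commands the test runs: a profile name that collides with a machine name in an existing profile is rejected with MK_USAGE (exit code 14), and `node add` refuses a node name already owned by another profile (exit code 80). A sketch, assuming the multinode-851952 cluster from the earlier steps still exists:

	$ out/minikube-linux-amd64 start -p multinode-851952-m02 --driver=kvm2 --container-runtime=crio    # exit 14: name clashes with a machine in multinode-851952
	$ out/minikube-linux-amd64 start -p multinode-851952-m03 --driver=kvm2 --container-runtime=crio    # allowed: creates a standalone profile
	$ out/minikube-linux-amd64 node add -p multinode-851952                                            # exit 80: generated node name m03 is already taken
	$ out/minikube-linux-amd64 delete -p multinode-851952-m03                                          # cleanup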

                                                
                                    
TestScheduledStopUnix (109.78s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-787057 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-787057 --memory=2048 --driver=kvm2  --container-runtime=crio: (38.273834455s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-787057 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-787057 -n scheduled-stop-787057
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-787057 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-787057 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-787057 -n scheduled-stop-787057
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-787057
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-787057 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-787057
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-787057: exit status 7 (61.878918ms)

                                                
                                                
-- stdout --
	scheduled-stop-787057
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-787057 -n scheduled-stop-787057
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-787057 -n scheduled-stop-787057: exit status 7 (62.67522ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-787057" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-787057
--- PASS: TestScheduledStopUnix (109.78s)
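
For reference, the scheduled-stop flow driven above boils down to: schedule a stop, optionally cancel it, and poll status until the host reports Stopped (minikube status exits 7 once the VM is down). A minimal sketch using the same flags the test invokes:

	$ out/minikube-linux-amd64 stop -p scheduled-stop-787057 --schedule 5m
	$ out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-787057    # queries the time remaining before the scheduled stop
	$ out/minikube-linux-amd64 stop -p scheduled-stop-787057 --cancel-scheduled            # cancels without stopping
	$ out/minikube-linux-amd64 stop -p scheduled-stop-787057 --schedule 15s
	$ out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-787057          # "Stopped" (exit 7) after the schedule fires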

                                                
                                    
TestRunningBinaryUpgrade (227s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4152179682 start -p running-upgrade-313770 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4152179682 start -p running-upgrade-313770 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m2.207573036s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-313770 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-313770 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m41.040946455s)
helpers_test.go:175: Cleaning up "running-upgrade-313770" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-313770
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-313770: (1.231640227s)
--- PASS: TestRunningBinaryUpgrade (227.00s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-262372 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-262372 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (68.893101ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-262372] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19112-8111/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19112-8111/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
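
The only behaviour asserted here is the usage error: --no-kubernetes and --kubernetes-version are mutually exclusive, and the error text points at clearing any globally configured version. Illustrative recap of the two commands involved:

	$ out/minikube-linux-amd64 start -p NoKubernetes-262372 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio    # exit 14 (MK_USAGE)
	$ minikube config unset kubernetes-version    # the fix suggested in the error output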

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (71.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-262372 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-262372 --driver=kvm2  --container-runtime=crio: (1m11.724484395s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-262372 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (71.96s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (65.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-262372 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-262372 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m4.129964655s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-262372 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-262372 status -o json: exit status 2 (254.460729ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-262372","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-262372
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (65.22s)

                                                
                                    
TestNoKubernetes/serial/Start (49.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-262372 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-262372 --no-kubernetes --driver=kvm2  --container-runtime=crio: (49.697960839s)
--- PASS: TestNoKubernetes/serial/Start (49.70s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-262372 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-262372 "sudo systemctl is-active --quiet service kubelet": exit status 1 (202.605356ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
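
The probe is a plain systemd query over SSH: systemctl is-active --quiet exits non-zero when the unit is not active, so the wrapped minikube ssh call failing (exit status 1, with ssh reporting status 3) is exactly what a --no-kubernetes node should produce. Sketch of the same check:

	$ out/minikube-linux-amd64 ssh -p NoKubernetes-262372 "sudo systemctl is-active --quiet service kubelet"
	$ echo $?    # non-zero here, because kubelet is intentionally not running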

                                                
                                    
TestNoKubernetes/serial/ProfileList (30.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.597199724s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (15.593386794s)
--- PASS: TestNoKubernetes/serial/ProfileList (30.19s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-262372
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-262372: (2.704677758s)
--- PASS: TestNoKubernetes/serial/Stop (2.70s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (21.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-262372 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-262372 --driver=kvm2  --container-runtime=crio: (21.402134286s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.40s)

                                                
                                    
TestPause/serial/Start (105.23s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-709611 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-709611 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m45.230957465s)
--- PASS: TestPause/serial/Start (105.23s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-262372 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-262372 "sudo systemctl is-active --quiet service kubelet": exit status 1 (195.850363ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.28s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.28s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (135.09s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2253667907 start -p stopped-upgrade-693942 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2253667907 start -p stopped-upgrade-693942 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m29.111774829s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2253667907 -p stopped-upgrade-693942 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2253667907 -p stopped-upgrade-693942 stop: (1.397103918s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-693942 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-693942 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.576525s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (135.09s)
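
The upgrade path exercised here: create a cluster with an older release binary, stop it, then start the same profile with the binary under test so it adopts the existing machine and configuration. A hedged outline of the sequence, with <old-minikube> standing in for the versioned v1.26.0 binary the test downloads to /tmp:

	$ <old-minikube> start -p stopped-upgrade-693942 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	$ <old-minikube> -p stopped-upgrade-693942 stop
	$ out/minikube-linux-amd64 start -p stopped-upgrade-693942 --memory=2200 --driver=kvm2 --container-runtime=crio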

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-693942
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.90s)

                                                
                                    

Test skip (32/203)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    